
Ethereum Classic

Ethereum Classic is an open, decentralized, and permissionless public blockchain that aims to fulfill the original promise of Ethereum as a platform where smart contracts are free from third-party interference. ETC prioritizes trust-minimization, network security, and integrity. All network upgrades are non-contentious, aimed at fixing critical issues or adding value with newly proposed features; never at creating new tokens or bailing out flawed smart contracts and their interest groups.
[link]

Cryptocurrency News & Discussion

The official source for CryptoCurrency News, Discussion & Analysis.
[link]

I have had my new Ethereum Linux miner up for over 6 hours and I'm not sure when you get the complete block?

I keep seeing stuff fly across the screen, only I don't know if it ever stops. Here is a screenshot. My next step is to mine some coins, only I don't know if my laptop can do it. Help! By the way, is my Ethereum wallet what gets printed out when I type "geth account list"?
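For reference, "geth account list" prints the public addresses of your accounts; that address is what you hand out as a wallet or payout address, and it is not the private key. A rough sketch of what the output typically looks like (the path and address here are purely illustrative):
$ geth account list
Account #0: {d1ade25ccd3d550a7eb532ac759cde69dea971ad} keystore:///home/you/.ethereum/keystore/UTC--2020-10-24T...--d1ade25c...
The hex string in braces (written with a 0x prefix elsewhere) is the address you point your miner's payouts at; the keystore file it references holds the encrypted private key and should never be shared.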
submitted by elgreco390 to EtherMining [link] [comments]

My opinion on installing Ethereum clients in Linux.

It's hell. Please fix it; right now, it's only for ultra-geeks.
submitted by trancephorm to ethereum [link] [comments]

I wrote a complete how-to guide to stake Ethereum and now I'm looking for your feedback

Hi all! It took almost two weeks of research, fiddling, writing and reviewing, but my complete guide to staking on Ethereum is now ready. I tried to be as thorough as possible, targeting enthusiasts who may not be into Linux or Ethereum yet but still want to try this.
My guide is here: Come fare staking con Ethereum 2.0 (English autotranslation here: Staking Ethereum how to ).
I've done the whole procedure multiple times on Medalla, but I won't be able to do it on mainnet because I don't own the 32 ETH needed, and I don't have even remotely the spare fiat to get them.
Now I'm looking for your feedback: do you spot any error? Is something unclear? Can something be done in an easier way? For the reason above, I'm very much interested in the mainnet option, which I cannot test on my own.
Let me know, thanks!
submitted by Zane_TLI to ethstaker [link] [comments]

I set up a staking node and if I can, you can too [AMA]

Hey there fellow Ethereum community member.
Exactly 4 weeks ago, I decided to invest time and some of my decaying fiat into setting up a staking node on Medalla (the ETH 2.0 testnet) to see if it would give me enough confidence to actually become a genesis validator.
My level of comfort is pretty much that of the average Joe, and I consider myself a novice in this domain. I do, however, have a software engineering background.
As we're all excited about the upcoming December 1st launch date, I'm sure you might have a lot of questions, and I'll be more than happy to answer them if I can.
FYI I first tried to set up a node on the cloud (AWS) as I figured it would make it easier to manage and maintain online. I quickly decided against it after my bill hit $50 in 7 days.
Here's the setup I settled on (~$800 total, but above the minimum required):
- Intel NUC i5 (BEH)
- 2TB NVMe SSD
- 32GB RAM
- UPS backup battery
OS: Arch Linux (painful experience)
Eth1: Geth
Eth2: Prysm (Beacon, Validator, Slasher) & Teku (Beacon, Validator)
Monitoring: Prometheus
Dashboard: Grafana
Alerts: Slack & email via Grafana
Running the nodes is the easy part; monitoring & securing your infrastructure was the tedious portion. It might seem intimidating and it definitely requires a lot of time, as online guides are not that mainstream yet, but it is an amazing learning experience, and once you get your hands dirty, it is actually not that bad!
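For anyone wondering what the monitoring glue looks like, below is a minimal sketch of a Prometheus scrape config for a local Geth + Prysm setup. The ports are the defaults I believe those clients used at the time (Geth needs --metrics enabled; Prysm exposes /metrics on its monitoring ports), so treat them and the file location as assumptions and check your client docs:
# Write a minimal prometheus.yml (drop it wherever your Prometheus install expects its config)
$ cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: geth            # assumes geth is started with --metrics --metrics.addr 127.0.0.1
    metrics_path: /debug/metrics/prometheus
    static_configs: [{ targets: ['127.0.0.1:6060'] }]
  - job_name: prysm-beacon    # assumed default beacon-chain monitoring port
    static_configs: [{ targets: ['127.0.0.1:8080'] }]
  - job_name: prysm-validator # assumed default validator monitoring port
    static_configs: [{ targets: ['127.0.0.1:8081'] }]
EOF
Grafana then only needs Prometheus added as a data source to build dashboards and alerts on top of these metrics.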
TL;DR: Novice who set up his own staking infra, if I can do it, you can too, AMA!
submitted by fabdarice to ethtrader [link] [comments]

RX 5700 XT sudden low hashrate

Hi,
One month ago, I purchased a Red Devil RX 5700 XT for ethash mining. At the beginning, I got the hashrate that whattomine.com gives as an estimate, about 50-52 MH/s, depending on computer usage, temperature, etc. I'm using Windows 10, and this is my main computer.
After I did a BIOS mod (many YouTube videos and web how-tos are available), I went up to 54-55 MH/s. All was fine until last week. It began after I installed Manjaro Linux as a second operating system. When I mine with Linux, I first go up to 54 MH/s, then after a few seconds it stays at 35 MH/s. In Windows, it first starts at full speed, then goes down to about 42-46 MH/s. I've tried with the BIOS mod and without (it's a dual-BIOS card), and the same issue occurs. I also have an RX 580 in this computer, and its hashrate remains stable at 32.5 MH/s (undervolted).
Here is what I've done as troubleshooting:
- DDU'd the graphics driver (May 2020), then reinstalled it. I tried the recommended September version with the same method (DDU first), then the most recent one from October. All the drivers have the same issue.
- Tried several overclock settings (undervolting, overclocking memory to 1790 MHz instead of 1750, going back to stock settings too). I use OverdriveNTool for that.
- Reflashed the BIOS (the original one).
- Tried auto-tune in PhoenixMiner. Same hashrate with Ethminer and Claymore.
- Temperature stays at an average of 60 degrees Celsius (core). Tmem stays at 104. The card draws 200 W when it starts mining, then goes down to 100 W after a few minutes, and then the hashrate drops. Note that I saw this behaviour before the hashrate issue appeared: the card would go down to 100 W but still deliver the good hashrate.
- The PSU is a 1000 W Corsair. The computer draws 500 W at the wall with CPU mining (Monero) and GPU mining (Ethereum, RX 580 + 5700 XT).
Now I average 42 MH/s from my 5700 XT. At this hashrate, I could get better performance with 2x RX 580, which would cost less. I know AMD has many driver issues, but it doesn't seem to be that, as the same issue occurs on Linux, where the driver is more stable.
In gaming the card works very well, going to 180 W of power usage, sometimes more depending on which game I play. I do get random BSODs caused by the bad driver.
I searched the web before asking, and nothing I found helped me resolve this issue. I know the hashrate can vary, but now I've lost 13 MH/s for the same power consumption, so profitability is affected.
Thank you very much for your help!
submitted by sirdardares to EtherMining [link] [comments]

A slightly updated look at hardware for staking.

Some notes before we begin:
· The ideal setup, and best practice, is to have a dedicated computer for staking. Try to limit additional processes running on your staking box, especially anything that connects to the outside world.
· Use Linux! It's easy, I promise. For the foreseeable future, Linux will receive better support from the client teams. It is lightweight, stable, and secure, and it doesn't force you to restart for updates every other day.
· You are locking up at least 11,000 dollars in this endeavor, probably for 1-2 years. No one knows how much that 11,000 could turn into during that time period (I think a lot). It makes sense to buy some good hardware. It will pay for itself relatively quickly.
· A battery backup is strongly recommended! Plug your modem and router into it also. My ISP has generators to support emergency services communications, meaning the internet continues to work during a power outage as long as my equipment is powered. Your ISP may be the same. Aside from blackouts, not having your computer shut down on every momentary power flicker is very valuable.
Raspberry Pi 4 8GB
Price: $104.80 (including case, power supply, SD card, and heat sinks).
Performance: While running a node and validator on a Raspberry Pi 4 is possible under good conditions, the CPU starts to show its weaknesses when the network is struggling. Add in the additional load of an Ethereum 1 node, and the Pi just doesn’t have the horsepower to be a reliable staking machine in phase 0, and beyond phase 0, they will likely not be able to keep up at all. I do not recommend staking on a Pi.
Power Usage: Approximately 8 watts. This would cost about 76 cents a month to run at 13c/kwh.
My opinion: I would not recommend purchasing a RPI4 for staking.
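For what it's worth, the power-cost figures quoted throughout this post follow from simple arithmetic: watts divided by 1000 (to get kW), times hours in a month, times your electricity rate. A quick sketch for the Pi numbers above:
$ WATTS=8; RATE=0.13                              # 13 cents per kWh
$ echo "$WATTS / 1000 * 24 * 30 * $RATE" | bc -l  # ~0.75, i.e. the ~76 cents/month quoted above
The same formula roughly reproduces the desktop, NUC, and server estimates further down.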
Old laptop/desktop
Price: Free! Well, kind of anyways.
CPU: Going by Prysmatic’s recommended minimum requirement of an Intel i5-760, a CPU with a passmark score above 2500 is necessary. However, their recommended specs include a CPU that scores 7075. For staking on main net, I would strongly recommend a CPU that is at least in the 6000s or better.
Memory: Unless you go with an extremely bare-bones OS, 16GB is the minimum RAM I would recommend for main net. My staking machine typically sits at about 7.5-7.9GB used in total, which is too close for comfort to 8GB in my opinion.
Storage: An SSD is required. Pretty much any SSD should work fine. Buying one with a high terabytes written spec will help with longevity. While you could get by with a 512GB SSD for a little while, buying a 1TB or 2TB SSD will be a better move long term.
Caveats: Stability and uptime are essential to maximize your profits. If you are using an older desktop consider replacing the PSU and the fans. Buying a titanium or platinum rated PSU will help save on the monthly power bill as well.
If you are planning on staking with an older laptop, consider that they have reduced capacity to deal with heat due to their form factor, and in rare cases, running while plugged in 24/7 can cause issues with the battery. If you do choose to stake with a laptop, I would recommend using one that far exceeds the CPU requirements as running a laptop at nearly full load 24/7 is not advisable. You will probably be fine, but generally speaking laptops are not designed with that in mind.
New laptop
If you are buying brand new, I do not see any value in paying the price premium for a portable form factor, screen, keyboard, and trackpad. Once you get your staking machine set up, you do not need any of these features, you can just remote into the staking machine from your daily driver computer. The low profile form factor will actually be a downside when taking thermal performance in to account. Laptops typically do not include an ethernet port now, which means you will be relying on WiFi. WiFi is very reliable now, but you can't beat the simplicity and reliability of a cable.
New pre-built desktop.
Price: Around 500-600 dollars. There are likely better deals out there than the one I linked.
Performance: This will reliably and competently run nearly any number of validator accounts. The CPU scores 6667 on Passmark. It has a 1TB SSD and 16GB of RAM. Any other prebuilt desktop with similar specs will work just as well. Shop around for one you like.
Power Usage: Probably around 30 watts. That is $2.85 per month at 13c/kwh.
My opinion: This is a great option. Also, it is 11" x 10" x 4". Much smaller than the old fashioned desktop cases, and ATX mid tower cases most of us are probably familiar with.
Custom built desktop
I won't go too in-depth here because this is essentially the same as using a prebuilt desktop. However, building your own gives you the option of choosing a case you like the look of, and buying higher quality parts. For those of you who have never built a computer, I assure you it is easier than Lego because they only go together one way. Also, you won’t get any weird proprietary parts that will be difficult to replace should they ever fail. Unfortunately with prebuilt computers, concessions are sometimes made with components like the PSU to assuage the accountants and boost margins.
Style points for adding a RAID card!
NUC/mini PC/dapp node
Price: $389.99 plus an SSD and 16 or 32GB of memory.
Performance: The one I linked weighs in at a mighty 8394 Passmark score; pair that up with 16GB of memory and it will run a node and more validators than Vitalik could spin up without breaking a sweat.
Power usage: 20-25ish watts. Around 2 dollars a month.
My opinion: NUCs are super cute, and their small form factor gives them a very high significant-other approval factor. Unfortunately that does come with a bit of a price premium. I'm going to argue that you should buy a server below, but honestly this is probably a more realistic option for most people.
Server
One option, or a more modern option. You really need to look around for deals when it comes to this. Usedservers.com charges a premium for the convenience and customization they offer. If you search through eBay, or even better your local classifieds, you can often find gear, which someone originally paid a large pile of money for, selling for a few hundred bucks.
Performance: Generally speaking, no matter what you buy, as long as it isn’t totally ancient, performance will not be an issue. The two options I linked above can be configured to the cheapest option and it will still be overkill.
Power Usage: It's bad. My server runs around 100 watts, but it is pretty modern. If you get an older one, expect to be up around 150 watts. That's 10-14 dollars a month.
My opinion: This is my favorite option. Enterprise servers are jam packed with features, and are specifically designed to do exactly what we are trying to do. Run 24/7/365. They have redundant power supplies in case one breaks, they mostly have 2 CPUs, so in the unlikely event of one going bad, you can pop it out and restart with just one. They have built in RAID cards so you can have redundant storage. They have hot swappable drive trays, so if one of your drives goes bad, you don't even need to shut down. All of the components are high quality and built to last. You also get monitoring and maintenance tools that are not included in consumer gear like iDRAC and iLo. That's where that power usage graph I linked above came from. Neat right? I wouldn't necessarily recommend this option to someone running 1 validator, but if you are running several, the few extra dollars of overhead every month is worth the reliability and performance in my opinion.
Avado
It's a NUC, but expensive. The most expensive one at 1100 USD only rates in at 3349 on passmark. They have their own OS which might have a really great UX, I don't know, but it likely is not worth the price of admission. Dappnode is another option if you are looking for a custom built OS with an easy UX. A Dappnode box is just a NUC preconfigured with their software. If you are confident enough to install an OS, you can save a few bucks buying a normal NUC and installing Dappnode yourself. You can also install the Dappnode OS on any computer. If not, buying a Dappnode box is a convenient and simple way to get started.
Virtual Private Server
Price: I looked over the different provider's websites and it looks to be anywhere from 20-40 dollars a month.
Performance: You can buy as much as you can afford.
My opinion: If you live somewhere that is prone to natural disasters or has an unstable power grid or internet connection but still want to stake, this is a good option. If you do have stable power and internet, running your own hardware will be a cheaper/more profitable solution long term. You need to evaluate the pros/cons of this for your own situation. Remember that if one of the VPS providers goes down, it will mean all of the people using that VPS service to host will also go down, and the inactivity penalties will be much larger than if you have uncorrelated down time yourself.
submitted by LamboshiNakaghini to ethstaker [link] [comments]

How can I mine Ethereum with 4 GB GPUs for just a little longer?

I was mining Ethereum with 5 x RX 580 4GB GPUs on Nanopool using Nanominer.
Several days ago the DAG file became too big for the cards to handle, so I want to switch to another algorithm. However, I have 0.197 ETH currently in my account and the payout minimum is set at 0.2.
Is there any way I can mine for just a little longer so I can have my 0.2 ETH paid out?
I read somewhere that Windows uses up some of the GPU VRAM; is there a way to reduce that amount?
I am operating the miner remotely using TeamViewer, so I cannot install a Linux-based OS or use the BIOS.
Any ideas on what I could do?
Thanks in advance :)
submitted by Random3014 to EtherMining [link] [comments]

Lighthouse validator on Medalla - public key showing no results on BeaconScan

Thanks in advance to anyone that can help me with this issue...
I've gotten to "Step 4 — Put validator stake with the Medalla" on this guide - https://medium.com/coinmonks/how-to-setup-ethereum-2-0-validator-node-lighthouse-meddala-goerli-4f0b85d5c8f
I can see in the `systemd` logs for `lighthousevalidator.service` that the validator key was loaded successfully (sorry for the vague terminology, I'm still pretty new to this):
Oct 24 23:19:16 Eth2 lighthouse[1032]: Oct 24 23:19:16.139 INFO Configured for testnet name: medalla
Oct 24 23:19:16 Eth2 lighthouse[1032]: Oct 24 23:19:16.141 INFO Starting validator client datadir: "/var/lib/lighthouse/validator", beacon_node: http://localhost:5052/
Oct 24 23:19:16 Eth2 lighthouse[1032]: Oct 24 23:19:16.151 INFO Completed validator discovery new_validators: 0
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.405 INFO Enabled validator voting_pubkey: 0xb140ad82e5549e327cf305e37ac8fda844224d1772a453bf12be62fd7827c40c7d827c
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.406 INFO Initialized validators enabled: 1, disabled: 0
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.453 INFO Connected to beacon node version: Lighthouse/v0.2.13-56ffe91f/x86_64-linux
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.460 INFO Genesis has already occurred seconds_ago: 7035549
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.474 INFO Loaded validator keypair store voting_validators: 1
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.485 INFO Block production service started service: block
Oct 24 23:19:17 Eth2 lighthouse[1032]: Oct 24 23:19:17.495 INFO Attestation production service started next_update_millis: 2504, service: attestation

The voting public key seems to agree with the one installed into my lighthouse validator data dir, although strangely cut off:

root@Eth2:~# cat /var/lib/lighthouse/validator/validator_definitions.yml
---
- enabled: true
  voting_public_key: "0xb140ad82e5549e327cf305e37ac8fda844224d1772a453bf12be62fd7827c40c7d827c2a9ea91813df3023ff875f508b"

Should I be able to find my validator when I search for this voting_public_key on https://beaconscan.com/validators ?

Sorry if this is a super clueless question, just trying to get to the point where I'm confident I can launch the real thing when it launches.
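One way to sanity-check this locally is to ask your own beacon node about the key. The path below is from the standardized beacon node HTTP API; Lighthouse builds from that era exposed a slightly different, non-standard API on the same port (5052 in this guide), so treat the exact endpoint as an assumption and check the docs for your version:
$ curl -s http://localhost:5052/eth/v1/beacon/states/head/validators/0xb140ad82e5549e327cf305e37ac8fda844224d1772a453bf12be62fd7827c40c7d827c2a9ea91813df3023ff875f508b
If the 32 (Goerli) ETH deposit hasn't been processed into the beacon state yet, both this query and explorers like BeaconScan may show nothing for the key, which is expected until the deposit is picked up.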
submitted by SystemShock86 to ethstaker [link] [comments]

Guide for full node on AWS with reasonable amount of time/money/headaches

My home machine and Internet connection aren't sufficient to keep a full node running in the background. Like many in this predicament, I decided to provision the necessary hardware in the cloud. In this case, AWS. I wanted to avoid spending too much money, not spend forever getting the node synced, and have a reproducible process that isn't too convoluted.
After fumbling around the past few days, here's the step-by-step process that I've settled on. I'm sure it's obvious for those of you with cloud expertise. But for the rest of us I haven't seen any good practical guides on bootstrapping a full-node on AWS in a way that's reasonable in terms of time, money and complexity. As always, YMMV depending on your situation. I don't claim this is the best approach, but it is a better approach than any I've seen so far.
The underlying dilemma is that syncing the chain is really compute, memory, IO, and bandwidth intensive. Whereas running an already synced chain is pretty cheap. The gist of the process is to exploit that by syncing on a beefy EC2 instance, then move the node over to a small, cheap EC2 instance.
The whole process can be done with about 20 minutes of effort, and 15 hours of unattended syncing. The upfront AWS cost for the syncing is about $5-8. The running cost of the node is about $55/month. But you can start/stop it and save 75% of that rate for the time you don't use it.
Step 0: Pick a single availability zone that you want to run the node in. In the following steps, make sure everything you provision is in this zone. (I'll use us-east-1f for the remainder of this guide)
Step 1: Navigate to the EC2 instances console. Launch an i3.xlarge instance with Amazon Linux x64. Use the default volume size and settings. At $0.30/hour, this is a decently expensive instance, but it will sync fast, especially because it has direct-attached storage.
Step 2: After the instance boots, ssh into the machine. Download and install Geth to the home directory using the following commands:
$ wget https://gethstore.blob.core.windows.net/builds/geth-linux-amd64-1.9.23-8c2f2715.tar.gz
$ tar -xzf geth-linux-amd64-1.9.23-8c2f2715.tar.gz
$ mv geth-linux-amd64-1.9.23-8c2f2715/geth ~/
Step 3: Mount the direct attached storage using the following commands
$ sudo mkdir /mnt/nvm/
$ sudo mkfs -t ext4 /dev/nvme0n1
$ sudo mount -t ext4 /dev/nvme0n1 /mnt/nvm
$ sudo mkdir /mnt/nvm/ether
$ sudo chown ec2-user:ec2-user /mnt/nvm/ether
Step 4: Start syncing Geth with the following command:
$ ./geth --datadir /mnt/nvm/ether --syncmode=fast --maxpeers=100 --cache=28000
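Since this sync runs for many hours over an SSH session, it's worth launching it so that it survives a dropped connection; a minimal sketch using nohup (screen or tmux work just as well):
$ nohup ./geth --datadir /mnt/nvm/ether --syncmode=fast --maxpeers=100 --cache=28000 > geth-sync.log 2>&1 &
$ tail -f geth-sync.log   # watch progress; Ctrl-C only stops the tail, geth keeps syncing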
This step will take about 12-18 hours. In terms of money it will cost you about $5 for the EC2 time. It uses a lot of download bandwidth, but AWS doesn't charge for incoming data. You can determine when the sync is done either by watching the console for when the "Imported new chain segment" lines stop having an age= field, or by using the Geth console:
$ ./geth attach ipc:/mnt/nvm/ether/geth.ipc
> eth.syncing
If it says false, syncing is done. If not, it gives you a JSON object telling you how many blocks are left. Note that geth quickly catches up to just 64 blocks behind the latest block, but takes a long time to finish from there. This is known behavior, and it doesn't mean your syncing is broken.
Step 5: When syncing is done, navigate to the EC2 console and open the EBS Volumes tab. Create a new volume with "General Purpose SSD (gp2)" type. Make sure it's in the same availability zone as the running EC2 instance. Size it to be at least as large as the chain data plus another 50GB to leave room for growth. At the time of writing that number was 325GB, but you can check by running df -h on the EC2 instance and looking at the "/mnt/nvm" line.
Step 6: Attach the newly created volume to the EC2 instance. Then format and mount it:
$ sudo mkdir /mnt/export/
$ sudo mkfs -t ext4 /dev/xvdf
$ sudo mount -t ext4 /dev/xvdf /mnt/export/
$ sudo mkdir /mnt/export/ether
$ sudo chown ec2-user:ec2-user /mnt/export/ether
Step 7: Copy the chain from the direct attached storage to the EBS volume:
$ cp -r /mnt/nvm/ether/* /mnt/export/ether/
This will take about an hour or two.
Step 8: Unmount and detach the EBS volume. Run the following command:
$ sudo umount /mnt/export 
Then navigate to the EBS console page, select the volume and click "detach".
Step 9: Start the cheap EC2 instance that you'll use from here on out to run the node. The t4g.medium instance type is the cheapest option that works. Launch with Amazon Linux ARM64.
Step 10: ssh into the new instance and install geth to the home directory:
$ wget https://gethstore.blob.core.windows.net/builds/geth-linux-arm64-1.9.23-8c2f2715.tar.gz $ tar -xzf geth-linux-arm64-1.9.23-8c2f2715.tar.gz $ mv geth-linux-arm64-1.9.23-8c2f2715/geth ~/ 
Step 11: Navigate to the EBS console page, find the volume from before, then click Attach. Select the t4g.medium instance that we just launched.
Step 12: Mount the EBS volume on the new instance.
$ sudo mkdir /mnt/ebs/
$ sudo mount -t ext4 /dev/sdf /mnt/ebs/
Step 13: Launch the full node:
$ ./geth --datadir /mnt/ebs/ether --syncmode=fast
You'll have a small amount of syncing to do from when you stopped running the i3.xlarge node. This gap-sync should complete in 5 minutes. You can check the status the same way you did in step 4. Once that's done, congratulations, you have a full node running!
Step 14 (very important): Navigate to the EC2 console and terminate the i3.xlarge instance. Don't forget this step (even if you abandon the process in the middle) or else you'll keep paying money for the expensive i3.xlarge.
Epilogue: From here you have a lot of options. AWS is very reliable, so you can just keep running the node (probably with nohup so you don't have to stay logged in.) You can use it for a wallet on your home computer, mobile device, or anywhere else. Just set up the AWS security group for JSON-RPC on port 8545.
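Note that geth only listens on its local IPC socket by default, so to use the node as a wallet backend over the network it also has to be told to serve JSON-RPC over HTTP. A hedged sketch; the HTTP flag names changed during the geth 1.9.x series, so check ./geth --help for the build you downloaded:
# Newer 1.9.x builds use the --http flags; older ones use --rpc/--rpcaddr/--rpcport instead
$ ./geth --datadir /mnt/ebs/ether --syncmode=fast --http --http.addr 0.0.0.0 --http.port 8545
# Keep the AWS security group restricted to your own IP: an RPC port open to the
# whole internet will be found and abused quickly.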
If you're not using it, you can stop and start the instance. A stopped instance doesn't pay the EC2 fees ($25/month). You can also snapshot the EBS volume, delete the volume, then clone it again when you want to use it. That cuts the EBS fees from $32/month to $16/month.
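The stop/start and snapshot dance can also be scripted with the AWS CLI instead of clicking through the console; the instance and volume IDs below are placeholders for your own:
$ aws ec2 stop-instances  --instance-ids i-0123456789abcdef0    # stop paying for EC2 compute
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "geth chaindata"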
When you restart, you'll have to gap sync. So if it's been a long time, I suggest gap-syncing first with an i3.xlarge instance to catch up quickly, then repeating the above process to move the chain over to a cheap t4g.medium instance. If you want redundancy, you can also clone the EBS volume and run it on multiple instances, possibly in separate regions or availability zones.
Finally, if you want to get fancy you could probably cut the running cost down to $20/month. About half the cost is the EC2 instance and half is the EBS disk space. For the former, spot instances are about 80% cheaper, but subject to being randomly killed. Which means, you need an automated system for re-launching the node on the new instance and mounting the volume. Kubernetes with a persistent EBS volume would work here. In terms of reducing storage, the simplest fix is probably switching to OpenEthereum in warp mode, which should only be 100GB instead of 300GB.
submitted by CPlusPlusDeveloper to ethereum [link] [comments]

Reflections on setting up a validator as a genuine complete noob

So this weekend I set myself a challenge of getting a medalla testnet validator up and running. The good news is, I did! The less-good news is, it took me the best part of five days.
One of the things that I found most difficult was that, in spite of the many excellent guides that have been written on this so far, none of them (purely in my own experience) was comprehensive from the perspective of a 100% full noob. I wasn't able to make any of the guides I followed work without encountering issues of some kind (albeit some minor). This is more a reflection on my own ability, so I'm making this post in the hope that some future expert documenter will take note of some things that weren't obvious to me as someone with absolutely no knowledge of Linux or CLIs.
As an analogy, it's akin to reading a recipe where there is an instruction to, for example, 'par-boil the potatoes' without describing the process itself for people who don't know what that is. How long for? Do the potatoes need to be peeled? Cut into pieces? Do we add salt? Does the water need to be boiling right the way through? How do we know when it's done? Do the potatoes need to cool? Etc., etc.
Some reflections on the experience (not of potato boiling):
  1. Hardware setup: I used the hardware recommendations from u/superphiz in this post. I'd never built my own computer before, but found it pretty straightforward thanks to the power of the youtube.
  2. Guides: I then set about following u/maninthecryptosuit's guide here, but also referred to approx half a dozen other guides that have been posted on this sub in the past week (shoutout to u/metanull-operator's excellent guide in particular). I really appreciate the time taken by each individual in attempting, of their own volition, to help the community. I would not be here without those efforts.
  3. Operating System: As per superphiz, I chose Ubuntu Server 20.04, as the recommendation to keep the system as light as possible makes sense. This was the first thing to be set up after building the NUC. It was fairly straightforward to download, although there was a small technical hurdle in formatting my USB stick to suit Linux from the Mac laptop Ubuntu was initially downloaded to. It was not obvious to me that this needed to be done (although it is now!). Again, YouTube, but I recommend that this at least be mentioned in future guides for noobs. Although the recommendation in most guides seemed to be to use Ubuntu Server, and I did initially set this up, I eventually ended up starting the process again from scratch with Ubuntu Desktop. More on that later.
  4. The command line: this was the first time I'd used a command line interface. At first I genuinely didn't know where to start (as in, literally how to access Terminal and what that is). However, once I got in, it was easy to copy and paste commands (I needed to google to learn about the various shortcuts etc). Sometimes I had issues with copy/pasted commands instantly executing when I wasn't expecting it, and sometimes not. And often the commands were split over multiple lines without that being obvious, leading to malformed arguments. One of the main issues I had throughout was not having a solid understanding of how directories are structured in Ubuntu, and how to navigate those via the command line... forward, back, checking the contents of a directory, etc. Again, more google.
  5. Remote access: The next challenge I had was understanding conceptually at a high level what systems I would be using - as in, am I supposed to be doing this whole process on the actual machine that will be running the validator? I'm going to be saying this a lot, but as a total noob, this was not immediately obvious to me. In fact, what eventuated was that I set up Ubuntu Desktop on the host machine, SSH'ed in from my MacBook, where I then did most things. Some guides did cover this to a degree... but I encountered some issues where sometimes I would get locked out due to permissions/keys issues. I won't go into those; suffice to say I think remote access/SSH is an area which needs more coverage in guides, as it seems to be a pretty standard way of doing things. I felt quite satisfied getting that aspect working - it felt like some kind of magic.
  6. Setting up the pre-requisites: in all of the guides there were lists of prerequisite things that needed to be set up to get validators etc working (things like git, python, rust, etc). I didn't have many issues here as I was simply cutting and pasting commands into Terminal. My main issue was that where things did go awry, it was not easy to diagnose why and address it. Sometimes I would attempt something one way from Guide A, fail, then attempt a similar thing from a different guide, and not get good results because each guide sets things up slightly differently and in different directory locations. As such, I found myself usually contained within one guide for the duration and found it difficult to take advice from the others in case of conflicting instructions.
  7. Setting up ETH1 node, beacon chain, and validator.
This was obviously the most difficult step. Issues:
The first challenge I had was that the order of operations across various guides was not consistent. In some guides the ETH1 node was set up first (or sometimes not mentioned), and in some guides not. In some guides validator keys were set up early, some not. That was confusing to me. There wasn't much narrative as to what needs to be set up first/last and in what order - I get that there is flexibility now, but only after having gone through the process. This led to an issue where in some cases I ran into difficulties, then switched to the corresponding instructions from a different guide (eg setting up a beacon node) but because the order of operations was different, certain things that had already been set up in guide 1 had not been in guide 2, causing the (no doubt very accurate) instructions not to work in my case. Very frustrating.
The reason I felt the need to hop between guides is because there was minimal guidance on how to diagnose issues that arose. Without any instruction on the nature of issues, what to do, and what commands to execute to diagnose and fix, I felt my best option was to see what other guides were doing on the same topic in the hope that they gave a steer on how to progress. Not ideal. I recognise much of that was driven by my own impatience - I should have been more methodical in attempts to resolve issues in their current state, but it is frustrating to get stopped frequently.
Goerli test ETH: some of the methods recommended to get this were better than others. The Prysm and Ethstaker discords seemed the easiest way. There were other recommendations to tap various faucets many times for tiny amounts of ETH, which seemed pretty impractical given we need 32; I'm not sure why they were recommended in the first place.
Validator keys: there is a need to generate your validator keys via the ETH2 launchpad (or CLI). Although generating the keys themselves I found straightforward...
It was difficult to understand how to get my keys from the server into the ETH2 launchpad interface, or conversely from my laptop (where my usual metamask account is) to the server, and to tell if actions had been successful or not, as there is no immediate CLI feedback. I struggled with this for so long that I gave up in the end and switched to Ubuntu Desktop, restarting the entire process from scratch, just so that I could set up a new metamask on the main machine and drag and drop the files. This was the most difficult aspect of the entire setup for me... copy/pasting and navigating Ubuntu CLI folder structures is not easy for someone with no previous understanding of the CLI. What I really wanted here was a step-by-step instruction on how to do this, beyond just "generate your keys on launchpad and send them to your validator keys folder."
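For what it's worth, the step most guides gloss over can be done with a single scp from the machine where the keys were generated to the staking box. Paths here are purely illustrative, since the exact source and destination folders depend on the guide and client you follow:
# Run from the laptop/desktop where the deposit CLI created the validator_keys folder
$ scp -r ~/eth2.0-deposit-cli/validator_keys youruser@your-staking-box:/home/youruser/
# Then, on the staking box, import the keys with your client's import command
# (for Prysm, that is its validator accounts import flow described in the Prysm docs).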
After that, I found it fairly straightforward to set up the ETH1 node, beacon node, and validator (I chose Prysm) following the documented steps. I did not succeed initially with the client setup instructions from the ETH2 launchpad and ended up going down a black hole where I eventually got Prysm working via VM, but this caused further issues down the line and contributed towards me starting afresh.
My main issues were around managing the processes once active. It was not obvious to me what was supposed to happen regarding setting these up in separate terminal windows, and whether or not to leave the terminal windows open. In practice, I ended up closing the terminal windows and was then uncertain about a) whether this had stopped the processes from running and b) how to get them back.
So, the areas I found I wanted more detail on in instructions were (one common answer, using systemd, is sketched just after this list):
- what are the processes (Terminal outputs) supposed to look like when they are successfully running?
- what are they supposed to look like while they are running but not yet fully synced?
- how can I check up on the status of these from the CLI?
- if I close a terminal window, how can I get back a live view of the processes?
- what has happened to the processes if/when the windows are shut down?
- what happens if my laptop is shut down but the server is still active?
- what happens if BOTH (or just the server) are shut down and how do I get them back?
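One common answer to most of the questions above is to run each client as a systemd service rather than in open terminal windows: the processes then keep running when a terminal is closed or the SSH session drops, and (once enabled) come back by themselves after a reboot. A hedged sketch, assuming hypothetical unit names such as beacon-chain.service and validator.service:
$ sudo systemctl status validator     # is it running, since when, and its last few log lines
$ sudo journalctl -fu validator       # live view of its logs; Ctrl-C stops the view, not the service
$ sudo systemctl restart validator    # bring it back after a crash or a config change
$ sudo systemctl enable validator     # start it automatically on every boot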
All told, I'm super happy and appreciative of all of those I have drawn on to get my validator up and running. However, I'm not certain at this point that I will be setting this up when it goes live, at least not initially. The main reason is that even though I know where to go to get help in working through issues (shoutout and props to the ethstaker and Prysm discords!), I have next to no ability to diagnose and fix issues myself, so I don't feel confident enough in what I'm doing to trust 32 ETH to it. Debugging overall was the main stress. I don't think I would do this again without a stress-tested GUI which involves minimal steps/clicks.
Hopefully that was useful to some people who may have been having similar issues.
TLDR; some issues happened that I couldn't work out how to fix and that was frustrating.
Edit: am going to attempt setting up Lighthouse, Grafana and Prometheus using u/SomerEsat's guide here next.
submitted by Coldsnap to ethstaker [link] [comments]

Remote signing with Teku and web3signer

Hi folks, for the "less initiated" here is a quick summary of some relevant discussion and my experience with this mode of validator operation.
TL;DR This is not a specific recommendation for this approach. If you just want to play around and see some of the technical implementation (which is fairly straightforward), skip the Background.
Background
Disclaimer: I am not associated in any way with Consensys or the development of this software nor am I expert in either their software or related security issues.
My sense is that most client teams are moving swiftly towards remote signing functionality, but this is the first one I have worked with. Whether this is a good idea for the "small time" individual staker is really a decision to make on a case-by-case basis for ourselves, but hopefully this info will be useful to those interested in exploring this approach.
In the context of the individual staker (I will be talking about this class of staker, rather than large scale or custodial participants), my take on the use of remote signing is as follows: The "standard" or simplest approach for most is to use the eth2.0-deposit-cli tool. This will generate a keystore file containing your validator private keys, encrypted via a password of your choosing. One disadvantage is thus that your password will be saved on disk, along with the very files it was used to encrypt. If (and it's quite a big 'if') an exploit were found in your client software, or other system software on your validating machine (or perhaps you were reckless enough to use your machine in such a way as to infect it with some kind of spyware), the password(s) and encrypted private key(s) could be revealed to, or harvested by, an attacker.
I am not an expert in how such attacks would ultimately be executed, but my guess would be that it would happen en masse (say, by a botnet exploiting a "0-day vulnerability") rather than via a targeted (and probably highly specialized) approach. There is also the important question of what an attacker could do with the keys. They are not a particularly attractive target since there is no direct pay-off. However, the individual staker could be in trouble since their stake could be slashed (and possibly held to ransom), and if such credentials were obtained en masse it could wreak havoc with ETH2.0 generally. I have no idea (and I doubt anyone else does) as to a reasonable estimate of the probability of such an eventuality. My guess would be it is pretty low. In which case, why worry about remote signing?
One argument is that, well, this is cutting edge stuff. Certainly, vulnerabilities will be discovered as we move forward. As to how they manifest, who knows, but with upwards of $10k on the line, and enormous headaches if something happened, perhaps it's worth putting in a little extra effort to reduce risk if the overhead is not too high.
Another argument comes from the standpoint of physical security. The probability of having your staking hardware stolen is probably a lot higher than that of a software exploit (assuming you take reasonable precautions and don't use your staking rig for browsing for porn). I'm not talking about a targeted theft either - just some ar$ehole robs your place and takes all the expensive-looking tech gear. I guess, even if someone stole your rig(s), they probably wouldn't have a clue what it was for. BUT, so what? If your password(s) and keystore file(s) are out in the wild you ain't gonna sleep well, and I wouldn't mind betting that at least some thieves today check their stolen hardware (computers, phones, whatever) for crypto wallets. In this event, you'd almost certainly have to "exit" your validator, which would seriously suck. (Also factor in that over the next 12 to 24 months, or more, Ethereum may become far more widespread - and valuable - and you can expect the interest in obtaining ETH via malicious means to increase.)
Please note there are other ways of protecting your hardware from theft, such as encrypting the storage so that after it is turned off it cannot be decrypted and accessed by others, but this too brings a certain amount of overhead - like if a power cut knocks out your validator and you are not around for a while to restart it with the decryption key. (Not a terrible scenario, but probably frustrating.)
Regardless, a partial solution to these problems is to use a remote signing service. With the keystore and password on a different machine, an attack against the validator alone will not compromise the keys. Secondly, if the validator is physically stolen, again, "keys are safu". If the "remote" machine is located at the same physical location as the validator, and on the same local network, then its benefits may be limited. (I am not qualified to answer this!)
If you want to try it out, here's how you can do it with Teku.
Remote Signing with Teku
Disclaimer: The following is not expected to result in a particularly secure setup, rather it illustrates some of the issues when configuring the system.
I have been using Teku for a while, and found it pretty stable. On Medalla, attestation rates are typically high (90 to 100%) and I had no problems joining the Spadina testnet. Built on Java, it is somewhat resource-hungry compared to others, but nonetheless seems a solid piece of software. Once you have Java installed, it's very easy to build and run. Remote signing can be implemented using Web3Signer. Again Java based, it is easy to install and run following the online documentation. The idea then is to run web3signer on a "remote" machine. It needs minimal processing power and could be a micro-PC like a Raspberry Pi. Personally I would run it as a dedicated machine, locking down all other services and network ports.
Before executing the web3signer process, you require the keystore file, a file containing the password in plain text, and a configuration file (.yaml) for each validator. While there are several different options ranging from raw unencrypted files through to cloud-based services, I focus here on the keystore option, and mention the Hashicorp Vault server option at the end. The configuration files do not need special names, but they sit in a directory that you specify when the process is launched. They will be parsed automatically by the process.
Example:
1.yaml
type: "file-keystore" keyType: "BLS" keystoreFile: ".json" keystorePasswordFile: "" 
Pretty straightforward.
Execute the process:
./web3signer-0.1.1-SNAPSHOT/bin/web3signer \
  --http-listen-host=192.168.1.xxx \
  --http-listen-port=9003 \
  --http-host-allowlist=<hostname of the signer machine> \
  --key-store-path=/usr/local/web3signer_config \
  eth2
The command line arguments are pretty obvious. Note the "--key-store-path" where the configuration files are stored, and that the "--http-host-allowlist" argument is the name of the hosting service, not a list of incoming clients. (I got caught out by this). What is missing here is a tls configuration and I confess I have not got this far yet, although I did implement tls for the Hashicorp Vault server.
Output:
2020-10-10 21:46:41.883+00:00 | main | INFO | Web3SignerApp | Web3Signer has started with args --http-listen-host=192.168.1.139,--http-listen-port=9003,--http-host-allowlist=ethnode-930077ea4.home.net,ethnode-930077ea4,--key-store-path=/usr/local/web3signer_config,eth2
2020-10-10 21:46:41.999+00:00 | main | INFO | Web3SignerApp | Version = web3signer/v0.1.1-dev-e70f7a40/linux-aarch_64/-ubuntu-openjdk64bitservervm-java-11
Setting logging level to INFO
2020-10-10 21:46:48.170+00:00 | main | INFO | BLS | BLS: loaded Mikuli library
2020-10-10 21:46:53.466+00:00 | main | INFO | Runner | Web3Signer has started, and ready to handle signing requests on 192.168.1.139:9003
Once this is stable and reports no errors you can go ahead and start Teku:
Start Teku
./bin/teku --network=medalla \
  --validators-external-signer-public-keys=0xa8c236f51c3825496f1701cfe69... \
  --validators-external-signer-url=http://<signer host>:9003 \
  --rest-api-enabled=true --rest-api-docs-enabled=true \
  --metrics-enabled --metrics-port=8012
Here I have a single validator and you can see that I point to the remote signing service. (At this time I have not investigated implementing https).
Eventually in your Teku logs you should be seeing (as usual):
01:41:25.211 INFO - Validator *** Published attestation Count: 1, Slot: 485606, Root: ea33cf..bd4e 
Not:
23:29:44.885 ERROR - Validator *** Failed to produce attestation Slot: 484948 tech.pegasys.teku.validator.client.signer.ExternalSignerException: External signer failed to sign and returned invalid response status code: 403 (See log file for full stack trace) 
You may find it easier initially to keep the validator and web3signer processes on the same machine.
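A quick way to verify the signer is reachable before pointing Teku at it is to hit it over HTTP from the validator machine. /upcheck is a simple health endpoint; the key-listing path is my recollection of the v0.1.x API, so verify both against the Web3Signer docs for your version:
$ curl http://192.168.1.xxx:9003/upcheck                  # should return OK if the signer is up
$ curl http://192.168.1.xxx:9003/api/v1/eth2/publicKeys   # should list the loaded validator public keys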
Hashicorp Vault
Hashicorp Vault is a serious piece of kit, capable of managing secrets at scale. However, it can be deployed locally if you so wish. The configuration is well beyond the scope of this post but I will say it is an educational process, should you wish to have a go (requires a high endurance threshold). Since it requires an enormous amount of management, it is hard to imagine this would be suitable for an individual staker.
I was able to set up the Vault on the same machine as the web3signer service but for added security you might want it on a third machine (or cloud etc). The configuration is such that validator private keys are stored directly in the vault, so you will need a tool to extract them from your keystore wallet.
This private key decryption tool can be used by running it in the web3signer directory:
java -cp "lib/*" ./DecryptKeystore.java ./.json 
"ethdo" is another tool that will provide the same functionality.
Obviously, revealing the private key under any circumstances should be handled with extreme caution!
Once extracted and entered appropriately into the vault (not covered here), the validator configuration file for web3signer looks like this:
2.yaml
type: "hashicorp" keyType: "BLS" tlsEnabled: "true" keyPath: "/v1/secret1/data/0x8f458e4c0e317972097a5bc9aa04e97abb..." keyName: "value" tlsKnownServersPath: "/uslocal/web3signer_config/knownhosts" serverHost: "192.168.1.xxx" serverPort: "8200" timeout: "10000" token: "s.1eR0qujDPD9OZElIX4dnzdQN" 
Note that the "token" on the last line is an authentication token generated by the Vault. Since this token resides on disk (plain text) it was not clear to me that security is much improved since a curl request to the Vault server when authenticated by this token can readily reveal the prviate key.
At any rate, if you have successfully configured the Hashicorp Vault server and "unsealed" it, you should readily be able to use it to provide the private keys to the web3signer process.
Summary
I found the basic implementation of web3signer pretty straightforward. I did, however, make the mistake of starting with the Hashicorp Vault option, which was a very long-winded process. This was a very brief discussion, which may be erroneous, so by all means make suggestions and corrections. Personally I have not decided whether this type of approach is really justified. A general rule is that the more complex the system, the more scope there is for user error. Ultimately with all things crypto, "you pays your money and takes your chances", and it may be that a well-managed single validator sitting behind a decent firewall, located in a discreet back office cupboard, is more than enough security.
Update:
I ran Teku from genesis on the Zinken testnet and configured it with two validators, both signed remotely with web3signer: one used the keystore file / password and one used the Hashicorp back end. I'm pleased to say that both validators worked as expected. In fact, they have both successfully proposed blocks and consistently hit 100% attestation rate. I have directly observed one attestation failure due to a signer timeout, but it's not clear why that was. (In fairness, for comparison, Lighthouse v0.3 has performed exactly the same, consistently hitting 100% attestation and proposing blocks.)
submitted by ben-ned to ethstaker [link] [comments]

NVidia – Know What You Own

How many people really understand what they’re buying, especially when it comes to highly specialized hardware companies? Most NVidia investors seem to be relying on a vague idea of how the company should thrive “in the future”, as their GPUs are ostensibly used for Artificial Intelligence, Cloud, holograms, etc. Having been shocked by how this company is represented in the media, I decided to lay out how this business works, doing my part to fight for reality. With what’s been going on in markets, I don’t like my chances but here goes:
Let’s start with…
How does NVDA make money?
NVDA is in the business of semiconductor design. As a simplified image in your head, you can imagine this as designing very detailed and elaborate posters. Their engineers create circuit patterns for printing onto semiconductor wafers. NVDA then pays a semiconductor foundry (the printer – generally TSMC) to create chips with those patterns on them.
Simply put, NVDA’s profits represent the price at which they can sell those chips, less the cost of printing and less the cost of paying their engineers to design them.
Notably, after the foundry prints the chips, NVDA also has to pay (I say pay, but really it is more like “sell at a discount to”) their “add-in board” (AIB) partners to stick the chips onto printed circuit boards (what you might imagine as green things with a bunch of capacitors on them). That leads to the final form in which buyers experience the GPU.
What is a GPU?
NVDA designs chips called GPUs (Graphical Processing Units). Initially, GPUs were used for the rapid processing and creation of images, but their use cases have expanded over time. You may be familiar with the CPU (Central Processing Unit). CPUs sit at the core of a computer system, doing most of the calculation, taking orders from the operating system (e.g. Windows, Linux), etc. AMD and Intel make CPUs. GPUs assist the CPU with certain tasks. You can think of the CPU as having a few giant very powerful engines. The GPU has a lot of small much less powerful engines. Sometimes you have to do a lot of really simple tasks that don’t require powerful engines to complete. Here, the act of engaging the powerful engines is a waste of time, as you end up spending most of your time revving them up and revving them down. In that scenario, it helps the CPU to hand that task over to the GPU in order to “accelerate” the completion of the task. The GPU only revs up a small engine for each task, and is able to rev up all the small engines simultaneously to knock out a large number of these simple tasks at the same time. Remember the GPU has lots of engines. The GPU also has an edge in interfacing a lot with memory but let’s not get too technical.
Who uses NVDA’s GPUs?
There are two main broad end markets for NVDA’s GPUs – Gaming and Professional. Let’s dig into each one:
The Gaming Market:
A Bit of Ancient History (Skip if impatient)
GPUs were first heavily used for gaming in arcades. They then made their way to consoles, and finally PCs. NVDA started out in the PC phase of GPU gaming usage. They weren’t the first company in the space, but they made several good moves that ultimately led to a very strong market position. Firstly, they focused on selling into OEMs – guys like the equivalent of today’s DELL/HP/Lenovo – which allowed a small company to get access to a big market without having to create a lot of relationships. Secondly, they focused on the design aspect of the GPU, and relied on their Asian supply chain to print the chip, to package the chip and to install it on a printed circuit board – the Asian supply chain ended up being the best in semis. But the insight that really let NVDA dominate was noticing that some GPU manufacturers were focusing on keeping hardware-accelerated Transform and Lighting as a Professional GPU feature. As a start-up, with no professional GPU business to disrupt, NVidia decided their best ticket into the big leagues was blowing up the market by including this professional-grade feature in their gaming product. It worked – and this was a real masterstroke – the visual and performance improvements were extraordinary. 3DFX, the initial leader in PC gaming GPUs, was vanquished, and importantly it happened when funding markets shut down with the tech bubble bursting and after 3DFX made some large ill-advised acquisitions. Consequently 3DFX went from hero to zero, and NVDA bought them for a pittance out of bankruptcy, acquiring the best IP portfolio in the industry.
Some more Modern History
This is what NVDA’s pure gaming card revenue looks like over time – NVDA only really broke these out in 2005 (note by pure, this means ex-Tegra revenues):
📷 https://hyperinflation2020.tumblr.com/private/618394577731223552/tumblr_Ikb8g9Cu9sxh2ERno
So what is the history here? Well, back in the late 90s when GPUs were first invented, they were required to play any 3D game. As discussed in the early history above, NVDA landed a hit product to start with early and got a strong burst of growth: revenues of 160M in 1998 went to 1900M in 2002. But then NVDA ran into strong competition from ATI (later purchased and currently owned by AMD). While NVDA’s sales struggled to stay flat from 2002 to 2004, ATI’s doubled from 1Bn to 2Bn. NVDA’s next major win came in 2006, with the 8000 series. ATI was late with a competing product, and NVDA’s sales skyrocketed – as can be seen in the graph above. With ATI being acquired by AMD they were unfocused for some time, and NVDA was able to keep their lead for an extended period. Sales slowed in 2008/2009 but that was due to the GFC – people don’t buy expensive GPU hardware in recessions.
And then we got to 2010 and the tide changed. Growth in desktop PCs ended. Here is a chart from Statista:
📷https://hyperinflation2020.tumblr.com/private/618394674172919808/tumblr_OgCnNwTyqhMhAE9r9
This resulted in two negative secular trends for Nvidia. Firstly, with the decline in popularity of desktop PCs, growth in gaming GPUs faded as well (below is a chart from Jon Peddie). Note that NVDA sells discrete GPUs, aka DT (Desktop) Discrete. Integrated GPUs are mainly made by Intel (these sit on the motherboard or with the CPU).
📷 https://hyperinflation2020.tumblr.com/private/618394688079200256/tumblr_rTtKwOlHPIVUj8e7h
You can see from the chart above that discrete desktop GPU sales are fading faster than integrated GPU sales. This is the other secular trend hurting NVDA’s gaming business. Integrated GPUs are getting better and better, taking over a wider range of tasks that were previously the domain of the discrete GPU. Surprisingly, the most popular eSports game of recent times – Fortnite – only requires Intel HD 4000 graphics – an Integrated GPU from 2012!
So at this point you might go back to NVDA’s gaming sales, and ask the question: What happened in 2015? How is NVDA overcoming these secular trends?
The answer consists of a few parts. Firstly, AMD dropped the ball in 2015. As you can see in this chart, sourced from 3DCenter, AMD market share was halved in 2015, due to a particularly poor product line-up:
📷 https://hyperinflation2020.tumblr.com/private/618394753459994624/tumblr_J7vRw9y0QxMlfm6Xd
Following this, NVDA came out with Pascal in 2016 – a very powerful offering in the mid to high end part of the GPU market. At the same time, AMD was focusing on rebuilding and had no compelling mid or high end offerings. AMD mainly focused on maintaining scale in the very low end. Following that came 2017 and 2018: AMD’s offering was still very poor at the time, but cryptomining drove demand for GPUs to new levels, and AMD’s GPUs were more compelling from a price-performance standpoint for crypto mining initially, perversely leading to AMD gaining share. NVDA quickly remedied that by improving their drivers to better mine crypto, regaining their relative positioning, and profiting in a big way from the crypto boom. Supply that was calibrated to meet gaming demand collided with cryptomining demand and Average Selling Prices of GPUs shot through the roof. Cryptominers bought top of the line GPUs aggressively.
A good way to see changes in crypto demand for GPUs is the mining profitability of Ethereum:
📷 https://hyperinflation2020.tumblr.com/private/618394769378443264/tumblr_cmBtR9gm8T2NI9jmQ
This leads us to where we are today. 2019 saw gaming revenues drop for NVDA. Where are they likely to head?
The secular trends of falling desktop sales along with falling discrete GPU sales have reasserted themselves, as per the Jon Peddie research above. Cryptomining profitability has collapsed.
AMD has come out with a new architecture, NAVI, and the 5700XT – the first Iteration, competes effectively with NVDA in the mid-high end space on a price/performance basis. This is the first real competition from AMD since 2014.
NVDA can see all these trends, and they tried to respond. Firstly, with volumes clearly declining, and likely with a glut of second-hand GPUs that can make their way to gamers over time from the crypto space, NVDA decided to pursue a price over volume strategy. They released their most expensive set of GPUs by far in the latest Turing series. They added a new feature, Ray Tracing, by leveraging the Tensor Cores they had created for Professional uses, hoping to use that as justification for higher prices (more on this in the section on Professional GPUs). Unfortunately for NVDA, gamers have responded quite poorly to Ray Tracing – it caused performance issues, had poor support, poor adoption, and the visual improvements in most cases are not particularly noticeable or relevant.
The last recession led to gaming revenues falling 30%, despite NVDA being in a very strong position at the time vis-à-vis AMD – this time around their position is quickly slipping and it appears that the recession is going to be bigger. Additionally, the shift away from discrete GPUs in gaming continues.
To make matters worse for NVDA, AMD won the slots in both the New Xbox and the New PlayStation, coming out later this year. The performance of just the AMD GPU in those consoles looks to be competitive with NVidia products that currently retail for more than the entire console is likely to cost. Consider that usually you have to pair that NVidia GPU with a bunch of other expensive hardware. The pricing and margin impact of this console cycle on NVDA is likely to be very substantially negative.
It would be prudent to assume a greater than 30% fall in gaming revenues from the very elevated 2019 levels, with likely secular decline to follow.
The Professional Market:
A Bit of Ancient History (again, skip if impatient)
As it turns out, graphical accelerators were first used in the Professional market, long before they were employed for Gaming purposes. The big leader in the space was a company called Silicon Graphics, who sold workstations with custom silicon optimised for graphical processing. Their sales were only $25Mn in 1985, but by 1997 they were doing 3.6Bn in revenue – truly exponential growth. Unfortunately for them, from that point on, discrete GPUs took over, and their highly engineered, customised workstations looked exorbitantly expensive in comparison. Sales sank to 500mn by 2006 and, with no profits in sight, they ended up filing for bankruptcy in 2009. Competition is harsh in the semiconductor industry.
Initially, the Professional market centred on visualisation and design, but it has changed over time. There were a lot of players and lot of nuance, but I am going to focus on more recent times, as they are more relevant to NVidia.
Some More Modern History
NVDA’s Professional business started after its gaming business, but we don’t have revenue disclosures that show exactly when it became relevant. This is what we do have – going back to 2005:
📷 https://hyperinflation2020.tumblr.com/private/618394785029472256/tumblr_fEcYAzdstyh6tqIsI
In the beginning, Professional revenues were focused on the 3D visualisation end of the spectrum, with initial sales going into workstations that were edging out the customised builds made by Silicon Graphics. Fairly quickly, however, GPUs added more and more functionality and started to turn into general parallel data processors rather than being solely optimised towards graphical processing.
As this change took place, people in scientific computing noticed, and started using GPUs to accelerate scientific workloads that involve very parallel computation, such as matrix manipulation. This started at the workstation level, but by 2007 NVDA decided to make a new line-up of Tesla series cards specifically suited to scientific computing. The professional segment now has several points of focus:
  1. GPUs used in workstations for things such as CAD graphical processing (Quadro Line)
  2. GPUs used in workstations for computational workloads such as running engineering simulations (Quadro Line)
  3. GPUs used in workstations for machine learning applications (Quadro line, though gaming cards can be used for this as well)
  4. GPUs used by enterprise customers for high performance computing (such as modelling oil wells) (Tesla Line)
  5. GPUs used by enterprise customers for machine learning projects (Tesla Line)
  6. GPUs used by hyperscalers (mostly for machine learning projects) (Tesla Line)
In more recent times, given the expansion of the Tesla line, NVDA has broken up reporting into Professional Visualisation (Quadro Line) and Datacenter (Tesla Line). Here are the revenue splits since that reporting started:
📷 https://hyperinflation2020.tumblr.com/private/618394798232158208/tumblr_3AdufrCWUFwLgyQw2
📷 https://hyperinflation2020.tumblr.com/private/618394810632601600/tumblr_2jmajktuc0T78Juw7
It is worth stopping here and thinking about the huge increase in sales delivered by the Tesla line. The reason for this huge boom is the sudden increase in interest in numerical techniques for machine learning. Let’s go on a brief detour here to understand what machine learning is, because a lot of people want to hype it but not many want to tell you what it actually is. I have the misfortune of being very familiar with the industry, which prevented me from buying into the hype. Oops – sometimes it really sucks being educated.
What is Machine Learning?
At a very high level, machine learning is all about trying to get some sort of insight out of data. Most of the core techniques used in machine learning were developed a long time ago, in the 1950s and 1960s. The most common machine learning technique, which most people have heard of and may be vaguely familiar with, is called regression analysis. Regression analysis involves fitting a line through a bunch of datapoints. The most common type of regression analysis is called “Ordinary Least Squares” OLS regression, and that type of regression has a “closed form” solution, which means that there is a very simple calculation you can do to fit an OLS regression line to data.
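For reference, the “closed form” in question is the standard normal-equations formula: with a design matrix X and a target vector y, the fitted coefficients are
\hat{\beta} = (X^\top X)^{-1} X^\top y
which is just a couple of matrix multiplications and one small matrix inversion.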
As it happens, fitting a line through points is not only easy to do, it also tends to be the main machine learning technique that people want to use, because it is very intuitive. You can make good sense of what the data is telling you and can understand the machine learning model you are using. Obviously, regression analysis doesn’t require a GPU!
However, there is another consideration in machine learning: if you want to use a regression model, you still need a human to select the data that you want to fit the line through. Also, sometimes the relationship doesn’t look like a line, but rather it might look like a curve. In this case, you need a human to “transform” the data before you fit a line through it in order to make the relationship linear.
So people had another idea here: what if instead of getting a person to select the right data to analyse, and the right model to apply, you could just get a computer to do that? Of course the problem with that is that computers are really stupid. They have no preconceived notion of what data to use or what relationship would make sense, so what they do is TRY EVERYTHING! And everything involves trying a hell of a lot of stuff. And trying a hell of a lot of stuff, most of which is useless garbage, involves a huge amount of computation. People tried this for a while through to the 1980s, decided it was useless, and dropped it… until recently.
What changed? Well we have more data now, and we have a lot more computing power, so we figured lets have another go at it. As it happens, the premier technique for trying a hell of a lot of stuff (99.999% of which is garbage you throw away) is called “Deep Learning”. Deep learning is SUPER computationally intensive, and that computation happens to involve a lot of matrix multiplication. And guess what just happens to have been doing a lot of matrix multiplication? GPUs!
Here is a chart that, for obvious reasons, lines up extremely well with the boom in Tesla GPU sales:
📷 https://hyperinflation2020.tumblr.com/private/618394825774989312/tumblr_IZ3ayFDB0CsGdYVHW
Now we need to realise a few things here. Deep Learning is not some magic silver bullet. There are specific applications where it has proven very useful – primarily areas that have a very large number of very weak relationships between bits of data that sum up into strong relationships. An example of one of those is Google Translate. On the other hand, in most analytical tasks, it is most useful to have an intuitive understanding of the data and to fit a simple and sensible model to it that is explainable. Deep learning models are not explainable in an intuitive manner. This is not only because they are complicated, but also because their scattershot technique of trying everything leaves a huge amount of garbage inside the model that cancels itself out when calculating the answer, but it is hard to see how it cancels itself out when stepping through it.
Given the quantum of hype on Deep learning and the space in general, many companies are using “Deep Learning”, “Machine Learning” and “AI” as marketing. Not many companies are actually generating significant amounts of tangible value from Deep Learning.
Back to the Competitive Picture
For the Tesla Segment
So NVDA happened to be in the right place at the right time to benefit from the Deep Learning hype. They happened to have a product ready to go and were able to charge a pretty penny for their product. But what happens as we proceed from here?
Firstly, it looks like the hype from Deep Learning has crested, which is not great from a future demand perspective. Not only that, but we really went from people having no GPUs, to people having GPUs. The next phase is people upgrading their old GPUs. It is much harder to sell an upgrade than to make the first sale.
Not only that, but GPUs are not the ideal manifestation of silicon for Deep Learning. NVDA themselves effectively admitted that with their latest iteration in the Datacentre, called Ampere. High Performance Computing, which was the initial use case for Tesla GPUs, was historically all about double precision floating point calculations (FP64). High precision calculations are required for simulations in aerospace/oil & gas/automotive.
NVDA basically sacrificed HPC and shifted further towards Deep Learning with Ampere, announced last Thursday. The FP64 performance of the A100 (the latest Ampere chip) increased a fairly pedestrian 24% from the V100, increasing from 7.8 to 9.7 TF. Not a surprise that NVDA lost El Capitan to AMD, given this shift away from a focus on HPC. Instead, NVDA jacked up their Tensor Cores (i.e. not the GPU cores) and focused very heavily on FP16 computation (a lot less precise than FP64). As it turns out, FP16 is precise enough for Deep Learning, and NVDA recognises that. The future industry standard is likely to be BFloat 16 – the format pioneered by Google, who lead in Deep Learning. Ampere now does 312 TF of BF16, which compares to the 420 TF of Google’s TPU V3 – Google’s Machine Learning specific processor. Not quite up to the 2018 board from Google, but getting better – if they cut out all of the Cuda cores and GPU functionality maybe they could get up to Google’s spec.
And indeed this is the problem for NVDA: when you make a GPU it has a large number of different use cases, and you provide a single product that meets all of these different use cases. That is a very hard thing to do, and explains why it has been difficult for competitors to muscle into the GPU space. On the other hand, when you are making a device that does one thing, such as deep learning, it is a much simpler thing to do. Google managed to do it with no GPU experience and is still ahead of NVDA. It is likely that Intel will be able to enter this space successfully, as they have widely signalled with the Xe.
There is of course the other large negative driver for Deep Learning, and that is the recession we are now in. Demand for GPU instances on Amazon has collapsed across the board, as evidenced by the fall in pricing. The below graph shows one example: this data is for renting out a single Tesla V100 GPU on AWS, which is the typical thing to do in an early exploratory phase for a Deep Learning model:
📷 https://hyperinflation2020.tumblr.com/private/618396177958944768/tumblr_Q86inWdeCwgeakUvh
With Deep Learning not delivering near-term tangible results, it is the first thing being cut. On their most recent conference call, IBM noted weakness in their cognitive division (AI), and noted weaker sales of their power servers, which is the line that houses Enterprise GPU servers at IBM. Facebook cancelled their AI residencies for this year, and Google pushed theirs out. Even if NVDA can put in a good quarter due to their new product rollout (Ampere), the future is rapidly becoming a very stormy place.
For the Quadro segment
The Quadro segment has been a cash cow for a long time, generating dependable sales and solid margins. AMD just decided to rock the boat a bit. Sensing NVDA’s focus on Deep Learning, AMD seems to be focusing on HPC – the Radeon VII announced recently with a price point of $1899 takes aim at NVDA’s most expensive Quadro, the GV100, priced at $8999. It does 6.5 TFLOPS of FP64 double precision, whereas the GV100 does 7.4 – talk about shaking up a quiet segment.
Pulling things together
Let’s go back to what NVidia fundamentally does – paying their engineers to design chips, getting TSMC to print those chips, and getting board partners in Taiwan to turn them into the final product.
We have seen how a confluence of several pieces of extremely good fortune lined up to increase NVidia’s sales and profits tremendously: first on the Gaming side, weak competition from AMD until 2014, coupled with a great product in form of Pascal in 2016, followed by a huge crypto driven boom in 2017 and 2018, and on the Professional side, a sudden and unexpected increase in interest in Deep Learning driving Tesla demand from 2017-2019 sky high.
It is worth noting what these transient factors have done to margins. When unexpected good things happen to a chip company, sales go up a lot, but there are no costs associated with those sales. Strong demand means that you can sell each chip for a higher price, but no additional design work is required, and you still pay the printer, TSMC, the same amount of money. Consequently NVDA’s margins have gone up substantially: well above their 11.9% long term average to hit a peak of 33.2%, and more recently 26.5%:
📷 https://hyperinflation2020.tumblr.com/private/618396192166100992/tumblr_RiWaD0RLscq4midoP
The question is, what would be a sensible margin going forward? Obviously 33% operating margin would attract a wall of competition and get competed away, which is why they can only be temporary. However, NVidia has shifted to having a greater proportion of its sales coming from non-OEM, and has a greater proportion of its sales coming from Professional rather than gaming. As such, maybe one can be generous and say NVDA can earn an 18% average operating margin over the next cycle. We can sense check these margins, using Intel. Intel has a long term average EBIT margin of about 25%. Intel happens to actually print the chips as well, so they collect a bigger fraction of the final product that they sell. NVDA, since it only does the design aspect, can’t earn a higher EBIT margin than Intel on average over the long term.
Tesla sales have likely gone too far and will moderate from here – perhaps down to a still more than respectable $2bn per year. Gaming resumes the long-term slide in discrete GPUs, which will likely be replaced by integrated GPUs to a greater and greater extent over time. But let’s be generous and say it maintains $3.5Bn per year for the add-in board business, and let’s assume we keep getting $750mn odd of Nintendo Switch revenues (despite that product being past peak of cycle, with Nintendo themselves forecasting a sales decline). Let’s assume AMD struggles to make progress in Quadro, despite undercutting NVDA on price by 75%, with continued revenues of around $1.2Bn. Add on the other 1.2Bn of Automotive, OEM and IP (I am not even counting the fact that car sales have collapsed and Automotive is likely to be down big), and we would end up with revenues of $8.65Bn; at an average operating margin of 20% through the cycle that would give $1.75Bn of operating earnings power, and if I say that the recent Mellanox acquisition manages to earn enough to pay for all the interest on NVDA’s debt, and I assume a tax rate of 15%, we would have around $1.5Bn in net income.
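For clarity, the arithmetic behind those figures (all numbers taken from the estimates above):
Tesla/Datacenter $2.0Bn + gaming add-in boards $3.5Bn + Nintendo Switch $0.75Bn + Quadro $1.2Bn + Automotive/OEM/IP $1.2Bn = $8.65Bn of revenue
$8.65Bn of revenue x ~20% operating margin ≈ $1.75Bn of operating earnings
$1.75Bn x (1 - 15% tax) ≈ $1.5Bn of net income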
This company currently has a market capitalisation of $209 Bn. It blows my mind that it trades on 139x what I consider to be fairly generous earnings – earnings that NVidia never even got close to seeing before the confluence of good luck hit them. But what really stuns me is the fact that investors are actually willing to extrapolate this chain of unlikely and positive events into the future.
Shockingly, Intel has a market cap of 245Bn, only 40Bn more than NVDA, but Intel’s sales and profits are 7x higher. And while Intel is facing competition from AMD, it is much more likely to hold onto those sales and profits than NVDA is. These are absolutely stunning valuation disparities.
If I didn’t see NVDA’s price, and I started from first principles and tried to calculate a prudent price for the company I would have estimated a$1.5Bn normalised profit, maybe on a 20x multiple giving them the benefit of the doubt despite heading into a huge recession, and considering the fact that there is not much debt and the company is very well run. That would give you a market cap of $30Bn, and a share price of $49. And it is currently $339. Wow. Obviously I’m short here!
submitted by HyperInflation2020 to stocks [link] [comments]

How to Stake on the ETH2 Medalla Testnet - A Beginner's Guide

Hey everybody, I am an absolute beginner who just managed to set up a staking node on the ETH 2.0 Medalla testnet that goes live Aug 4th, 2020. Shoutout to KBrot and others at ethfinance who patiently helped me out, and of course the friendly folks at the Prysm and Lighthouse discords!
I missed the mining craze last time around, but I'm stoked that I can be here for the Medalla testnet. If you are considering solo-staking on main net you absolutely should give the testnet a try. Documenting my steps here in case anybody else wants to give it a shot!
Initially I tried Lighthouse on Windows but there were compile issues so switched to Linux and it was much easier. I'd never used Linux before and had limited command line knowledge, so if I can do this, so can you!
Later on, I was able to get Prysm running on Windows quite easily. Steps included below.
ETH2 Client used: Sigma Prime Lighthouse on Linux Ubuntu Desktop latest version, not running an ETH1 node (using an Infura node instead as not enough disk space). But you should run an ETH1 node if you can, because you reduce the risk of penalties from Infura being unavailable when you're staking on the main net later on.
I have also added Prysm steps on Windows. For other clients, just replace steps 3-6 with the instructions from that client's dev team.
WARNING: DO NOT USE REAL ETH FOR STAKING ON THE TEST NET. Testnet staking requires TEST ETH called GoETH, NOT REAL ETH.
Steps:
  1. Create your validator keys at the Ethereum Foundation Medalla Launchpad
  2. Preparation: Install Linux, Rust, C++ build packages
  3. Install ETH2 client
  4. Start an ETH2 beacon node
  5. Import your validator keys into ETH2 client
  6. Start ETH2 validator

STEP 1 - Create your validator keys at the Ethereum Foundation Medalla Launchpad
Go to the Launchpad. Make sure you understand the 'Overview' section as much as you can.
If you already have a Linux machine set up and want to use the CLI to generate the keypair, follow the instructions to generate the key-pairs.
I wanted to do it on my Windows PC, so I skipped the 'Install developer libraries' and CLI steps.
Instead I downloaded the eth2deposit-cli-v0.2.1-windows-amd64.zip file from http://github.com/ethereum/eth2.0-deposit-cli/releases/tag/v0.2.1/
Unzip/extract, and run the deposit.exe file. Follow the steps and keep your keystore files and password safe.
I assume you want to set up 1 validator. Upload the validator json, connect your Metamask wallet and sign the transaction to send 32 GoETH from your Metamask wallet to the testnet deposit contract. If you don't have test ETH, get some from prylabs.net/participate (just click on step 2 - get goeth and connect your Metamask) or from the Prysmatic or Lighthouse discord. There is a bot channel there.
After signing the transaction your 32 GoETH has been deposited into the Medalla testnet contract!
Brave and Metamask don't work together with the Launchpad. Chrome + Metamask worked for me.

Step 2 - Preparation: Install Linux, Git, Rust, C++ build packages
I was able to run Prysm on Windows, but had issues with Lighthouse. So I set up Lighthouse on Linux. Here's how you do this:
Install Ubuntu desktop using these instructions to create a bootable USB disk. Ubuntu server doesn't have a GUI, so I went for desktop.
From a terminal window, install Ubuntu dependencies by copy-pasting and pressing Enter:
sudo apt install -y git gcc g++ make cmake pkg-config libssl-dev 
Install Git:
sudo apt install git-all 
For using Lighthouse, you need to install Rust:
curl https://sh.rustup.rs -sSf | sh 
For using Lighthouse, you also need a C/C++ build toolchain (on Windows that would be the Microsoft C++ Build Tools; on Ubuntu the build-essential package covers it):
sudo apt-get install build-essential 
NOTE: You may have to log out/restart Linux at this point to make the next step work.
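(Alternatively, assuming you installed Rust with the rustup script above, sourcing cargo's environment file in your current shell avoids the restart:)
source $HOME/.cargo/env 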

Step 3: Install an ETH2 Client
Prysm is the more popular client but for the sake of client diversity try to use one of the other clients also. Install Prysm by following the first 3 steps but don't start the beacon node yet.
The steps below are for Lighthouse (Linux), taken from this source.
Clone the lighthouse git with this command:
git clone https://github.com/sigp/lighthouse.git 
Go into the Lighthouse client directory:
cd lighthouse 
Compile the client using command, this will take a while:
make 

Step 4: Start an ETH2 Beacon node
Pick either 4a or 4b below - don't do both!
Step 4a: Start the ETH1 node & Beacon node
If your computer can run an ETH1 node (like GETH) which needs a 500GB SSD at least, please do so to support true decentralization and maximise your node uptime.
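As a very rough sketch, a Geth node on the Goerli testnet (the ETH1 chain that Medalla deposits live on) can be started with something like the command below. Flag names vary between Geth versions (older releases use --rpc instead of --http), so check the docs for your release; your beacon node then points at http://localhost:8545 instead of an Infura URL.
geth --goerli --http 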
See Prysm instructions here. For Lighthouse see instructions here. Then go to step 5.
Step 4b: Use a remote (3rd party) ETH1 node & start a beacon node
If you cannot run an ETH1 node because your computer is not powerful enough or the SSD is not big enough, you can use a public Infura end-point: Sign up for free at https://infura.io/ and create a new project. Under that project's settings, next to 'Endpoints' choose Goerli testnet and copy the https URL -> this is your Infura endpoint.
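Before wiring it into the beacon node you can sanity-check the endpoint with a plain JSON-RPC call (this assumes curl is installed; replace URL with the endpoint you just copied). A working endpoint returns a hex-encoded block number:
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' URL 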
For Lighthouse (Linux):
Open a new terminal window.
Replace the word URL below with this Infura endpoint URL, and run this command in a new terminal window.
lighthouse --testnet medalla beacon --eth1-endpoint=URL --http 
As a bonus, sign up for the POAP and add your graffiti to your beacon node to get special participation badges!
For lighthouse, the POAP graffiti has to be added to the beacon node not the validator (for Prysm, it is added to the validator not the beacon node - see instruction further below in the next step).
So use this command instead of the previous one to start the beacon node with your graffiti added:
lighthouse --testnet medalla beacon --eth1-endpoint=URL --http --graffiti YOURGRAFFITIHERE 
You should start seeing lines such as:
INFO Imported Deposit Log(s) 
and after it has caught up with all the deposits:
INFO Waiting for adequate ETH1 timestamp.... 
For Prysm (Windows):
If running your own ETH1 node, run in a new command line window:
 prysm.bat beacon-chain 
If using Infura, replace the word URL below with the Infura endpoint URL, and run this command in a new terminal window.
prysm.bat beacon-chain --http-web3provider=URL 
You should see something like this:
INFO powchain: Processing deposits from Ethereum 1 chain deposits=18432 genesisValidators=17871 

Step 5: Import your validator keys into the client
For Lighthouse (Linux):
Follow the instructions here. Ensure you place the validator keys folder in the right place.
I did this by pasting the 'eth2deposit-cli-de03fe3-windows-amd64' folder into my Linux lighthouse folder.
For Prysm (Windows): Follow the steps here. Ensure you place the validator keys folder in the right place.

Step 6: Start your ETH2 validator
For Lighthouse (Linux):
Open a new terminal window and run:
lighthouse vc 
If the validator started successfully, you will see something like this:
INFO Enabled validator voting_pubkey: 0xa5e8702533f6d66422e042a0bf3471ab9b302ce115633fa6fdc5643f804b6b4f1c33baf95f125ec21969a3b1e0dd9e56 
Until the Medalla testnet genesis, you will ALSO see an error like so on Lighthouse:
ERROR Unable to connect to beacon node error: "ReqwestError(reqwest::Error { kind: Request, url: \"http://localhost:5052/node/version\", source: hyper::Error(Connect, ConnectError(\"tcp connect error\", Os { code: 111, kind: ConnectionRefused, message: \"Connection refused\" })) })" 
This is perfectly normal, and will keep repeating until the Medalla testnet chain starts running later (the http server for the lighthouse beacon node doesn't start until genesis - confirmed by devs in Discord)
Note: The status messages may be different as we get closer to chain genesis and again at the actual genesis time.
Before genesis, update your client with this command if built from source:
git pull 
or if using the docker image:
docker pull sigp/lighthouse 
For Prysm (Windows):
Open a new terminal window and run:
prysm.bat validator 
As a bonus, sign up for the POAP and add your graffiti to your validator to get special participation badges! Use this command instead of the previous one to run a validator with your graffiti added:
prysm.bat validator --graffiti "YourGraffitiHere" 
NOTE: While Prysm adds the graffiti to the validator, Lighthouse adds it to the beacon node. The end result is the same though.
You should see this message if the validator started successfully:
INFO validator: Waiting for beacon chain start log from the ETH 1.0 deposit contract 
Note: The status messages may be different as we get closer to chain genesis and again at the actual genesis time.
Also your beacon node terminal will show that the validator has successfully connected to it.
INFO rpc: New gRPC client connected to beacon node addr=127.0.0.1:XXXXX 
So now you should have one terminal window running the beacon chain and another terminal window running the validator. Closing the terminal windows will terminate these, so be careful. I'd also advise changing your power settings so that your PC doesn't go to sleep automatically.
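On Ubuntu, one blunt way to do that from the terminal (just a suggestion, and reversible later by swapping mask for unmask) is:
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target 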
That's it! Your Medalla validator is now ready! Keep an eye on update instructions from the dev teams on their Discords and just wait for the chain genesis now (1300 hrs UTC, Aug 4th, 2020).
You can also enter your validator's public key in Beaconcha.in to monitor status and staking income.
If you spot any errors/improvements to these steps, do let me know!
EDIT: Prysm steps for Windows added. Lighthouse Graffiti commands added. Prysm and Lighthouse Discord links removed due to Reddit spam filters, look in comments for those links.
submitted by maninthecryptosuit to ethstaker [link] [comments]

Keeping Systemtime Accurate

I'm wondering what arrangements fellow stakers have made when it comes to keeping system time accurate. In a recent update, Prysm no longer manages the systemtime actively with the roughtime clock sync, but alerts the user with an INFO print that local time is drifting. Keeping time is thus the responsibility of the user.
“Time is critical for eth2. Without synchronized time, then network cannot function properly. You can rely on system time, which will invariably drift away. We use Cloudflare’s roughtime as a way to adjust your local clock if it is off.
However, roughtime was off by 4 hours yesterday, which led to chaos. The solution was to not forcibly adjust people’s time based on roughtime but instead log errors telling them their time is off.”
source: https://www.trustnodes.com/2020/08/15/ethereum-2-0-testnet-crashes
My question is, if "system time will invariably drift away", how do you keep your system time in check in Linux? Do you rely on seeing the warning the beacon chain or validator will print? Do you use ntpd? Are you then not still dependent on third parties like Cloudflare?
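For context, one minimal option on Ubuntu (a sketch, and only one of several approaches) is to run an NTP daemon such as chrony and check on it occasionally; installing chrony takes over timekeeping from systemd-timesyncd:
sudo apt install chrony 
chronyc tracking 
timedatectl status 
Either way you are still trusting external time sources, but chrony can be pointed at several independent NTP pools rather than a single provider.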
submitted by ManWomanTVPersonCam to ethstaker [link] [comments]

5700XT or 5600XT 6 GPU Build Query

I am in the process of putting together a 6 GPU rig. I have been looking at the 5700XT and 5600XT, as the standard 5700 just doesn't seem to be available anywhere for a reasonable price.
Now my question is which of the two cards to choose? The 5700XT setup will cost more upfront, so does the extra hash on the card offset the higher power costs? The 5600XT seems to be lower hash but also lower power, so does the 5600XT edge out the 5700XT over the longer term?
My plan was to mine Ethereum until I pay off the initial rig cost (with electricity costs factored in), and then carry on mining and just bank the ETH for now. I am guessing I will take 6+ months to break even, maybe closer to 12 months.
If I go the 5600XT route the electricity costs are obviously lower but equally the hash rate is lower, so is it better to just go for the 5700XT anyway?
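As a rough framework for that comparison (every number below is a made-up placeholder, not a forecast; substitute your actual card cost, daily revenue, power draw and electricity price, run it once per card model, and note it needs the bc utility):
CARD_COST=400            # USD per card (placeholder) 
DAILY_REV=1.50           # USD per card per day at current price/difficulty (placeholder) 
WATTS=130                # wall power per card (placeholder) 
KWH_PRICE=0.15           # USD per kWh (placeholder) 
DAILY_POWER=$(echo "$WATTS * 24 / 1000 * $KWH_PRICE" | bc -l) 
echo "Days to break even: $(echo "$CARD_COST / ($DAILY_REV - $DAILY_POWER)" | bc -l)" 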
I am trying to factor in resale value of the cards a little as well; I would say the 5700XT is going to be a bit more desirable than the 5600XT in the long term.
Another thing is the power supply. For the 5700XT I am guessing I will need a 1600W Platinum rated PSU to power the rig; if I went with the 5600XT I guess I could get away with a 1200W Platinum rated unit? There is a big cost difference between PSUs when you go from 1200W to 1600W.
The other thing swaying me slightly towards the 5600XT build is that the overall heat generated will be a bit less, so a bit more bearable. We are headed into the colder months now though, so the 5700XT will actually provide a bit more of a benefit heating the house lol.
As for motherboard, CPU and RAM, I am guessing 8GB of cheap RAM is sufficient. Are the AMD AM4 Athlon chips any good? I could pair one with something like an MSI B450 Pro motherboard and get 6 GPUs on it.
I have a spare NVMe drive I was going to put Windows on and run from, or am I better off running Linux? If so, does anyone recommend any particular distribution?
Any help or thoughts from others would be great appreciated.
submitted by FalcUK to EtherMining [link] [comments]

Discord Invalid Invite and sFTP issues

Hello,
I am setting up my Prysm client and I am running into issues, so I tried to access the ethstaker Discord server by clicking the link in the sidebar to get some advice, but all I get is "invalid invite". I am new to Discord so I'm not sure what I am doing wrong!
Also, on the client setup side of things, I am having issues with the "Copy Deposit Data File" step. I just can't seem to sftp into my server; SSH works fine. I've googled the issue but am quite stumped!
I type
sftp -p [PORTNUMBER] [USER]@[ServerIP]
And get this response
ssh: connect to host [ RANDOM SERVERIP] port 22: Connection timed out
Connection closed
Connection closed.
Any ideas?
I am using SomerEsat's guide (Cheers!) https://medium.com/@SomerEsat/guide-to-staking-on-ethereum-2-0-ubuntu-medalla-prysm-4d2a86cc637b however I'm having trouble with this bit: https://www.maketecheasier.com/use-sftp-transfer-files-linux-servers/
============== Solved! Answer below!! ===========
Instead of -p you use -oPort=port_number
sftp -oPort=port_number user@server_name

From man sftp:
sftp - secure file transfer program
sftp [-1Cv] [-B buffer_size] [-b batchfile] [-F ssh_config] [-o ssh_option] [-P sftp_server_path] [-R num_requests] [-S program] [-s subsystem | sftp_server] host
-o ssh_option
Can be used to pass options to ssh in the format used in ssh_config(5). This is useful for specifying options for which there is no separate sftp command-line flag. For example, to specify an alternate port use: sftp -oPort=24. For full details of the options listed below, and their possible values, see ssh_config(5).

submitted by Waving_from_heights to ethstaker [link] [comments]

Crypto brainpower index

TL;DR: Nano wins, 5.79x more brainpower than the next best, Monero.
Following a recent post about Nano marketing, I think it's worth highlighting that amongst the demographic that matters (developers), Nano is marketing itself. The data below highlights the ratio of marketcap to Github repos for the top 13 coins and Nano. A popular thesis is that open source always wins; e.g. the Linux kernel. Therefore, this is a very rough metric to gauge how under / over-valued coins are based on how much open-source brainpower they are attracting in the form of Github repos. I recently attempted a small project with Ethereum but gave up in a day because of gas fees and slow transactions. On the other hand, my Nano project is a joy to work on. In terms of methodology, all the Github searches are for 'Bitcoin', 'Ethereum', 'Ripple' (XRP fared even worse), etc PLUS 'cryptocurrency' to allow for the fact that 'Nano' is not a unique identifier for NANO. The lower the ratio number, the better.
Finally, this index could be massively improved upon so if anyone is looking for a project, start building!
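As a starting point, the ratio for a single coin can be pulled with a couple of curl/jq calls, assuming the public GitHub search API and the CoinGecko markets endpoint (both free but rate-limited; the query string below follows the methodology above, and the endpoints are worth double-checking):
REPOS=$(curl -s "https://api.github.com/search/repositories?q=nano+cryptocurrency" | jq '.total_count') 
MCAP=$(curl -s "https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&ids=nano" | jq '.[0].market_cap') 
echo "nano: market cap $MCAP / $REPOS repos = $(echo "$MCAP / $REPOS" | bc) per repo" 
Swap the search term and the CoinGecko id to cover the rest of the top 13.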

submitted by libertant to nanocurrency [link] [comments]

VSAN 7.0 upgrade to new/larger drives one host at a time.

I have a relatively substantial home lab, but I need more and faster storage for some of the things I'm playing around with. I'm wondering how horrible it would be for me to upgrade the drives on one host at a time to spread the cost out over a few months.
Current Config:
I want to swap each of the cache drives for an Intel D3-S4510 960GB SSD (SSDSC2KB960G801 1.89DWPD) and each of the capacity drives to Seagate Enterprise Capacity 2TB (ST2000NX0243 4Kn) drives.
While I'd love to be able to do this all in one shot (minding data integrity though), that's like $4K I'd prefer not to spend all in one shot.
How "bad" is it for your nodes to have different VSAN disk capacities? Another question, I know a mix of hybrid storage and all flash is not supported, but is it allowed? I've also considered using that same new cache drive, then just throw a bunch of cheap 2T SSDs behind it as capacity drives. That would cost about the same, so I'd still need to stagger the purchase if possible.
If anyone cares why, I want to stand up a couple of Ethereum archive nodes to do some testing, and that needs about 5TB of very fast storage per node. For full (non-archive) nodes I can mitigate the storage speed by throwing gobs of memory (256GB) at each node and letting Linux disk caching work its magic. Works amazingly well in that case, but that's not quite sufficient here.
Thank you very much for any feedback you can provide. I'm a network security guy these days, and haven't worked with VMware professionally in 5-6 years. I know enough to be super dangerous, and occasionally helpful but this is out of my depth, and I need input from someone with real practical knowledge.
submitted by bryanether to vmware [link] [comments]

4+ GB DAG benchmark

In preparation for DAG file sizes above 4GB I tried running a benchmark on my RX 580 8GB rig with claymore's "-benchmark" option. After starting the miner I got the following error:
GPU2 - not enough GPU memory to place DAG, you cannot mine this coin with this GPU GPU2 - OpenCL error -61 - cannot allocate big buffer for DAG. Check readme.txt for possible solutions.
When testing I used dag epoch of 400. I am running Linux. Is claymore simply not yet updated? Should I switch to different mining software (phoenixminer, ...)?
EDIT: https://github.com/ethereum-mining/ethminer/issues/1966#issuecomment-663826059
This means claymore is not yet updated?
EDIT 2: SUCCESS!!! I compiled ethminer from source and successfully benchmarked at block 20000000, which generates 6.2GB of DAG.
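For anyone wanting to reproduce this, the build was essentially the standard CMake flow (a sketch; dependency packages, cmake options and the exact benchmark flag differ between ethminer releases, so treat the last line as an example and check the --help output of your build):
git clone --recursive https://github.com/ethereum-mining/ethminer.git 
cd ethminer 
mkdir build && cd build 
cmake .. 
cmake --build . 
./ethminer/ethminer -M 20000000 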
submitted by slole to EtherMining [link] [comments]

On trolls, FUD, idiots a.k.a. the crypto paint-chip brigade and Daedalus Flight vs. Mainnet differences transcript

Real quick video to talk a bit about (Daedalus) Flight versus mainnet. The paint-chip brigade, the people on the internet who like criticizing us with unfounded statements, they've been running around twitter and youtube and reddit and other places saying that we're missing deadlines and of course it's because we're now doing ETC related stuff...
Now what these idiots don't understand or they do understand and they're just being vile people is that we didn't miss any deadlines and I can't even understand where they're coming from. I guess they don't know that there's actually two versions of the Daedalus wallet. So, to make sure that every single person in the space even those who are part of the paint-chip brigade understand this I'm going to explain it here.
We have Daedalus Flight and Daedalus mainnet. Both these wallets work on mainnet Cardano just like Yoroi and Daedalus (mainnet) both work on mainnet Cardano but they're two separate wallets. The primary difference between flight and mainnet is that Flight is for bleeding edge experimental or features that haven't been completely tested yet. Hardware wallet center, voting center, multisig, multi-asset all of these things will come to Daedalus and we are going to release them first on Flight and the people who use Flight accept that while they get features faster than everybody else those features have not been as tested as the features that are deployed on Daedalus mainnet. So, last week we did a release, we released Daedalus version 2.2 with the new node backend version 1.19 which is 50 to 100 times faster for syncing Shelley than Byron.
We didn't miss any deadline. We released that last week. I'm using it on this computer,right now, today, as are thousands of other people to use their ada and interface with ada. After we've tested Flight for a bit and we go through the normal QA-process we then update Daedalus mainnet so that's either going to happen this week or if the testing process takes a little bit longer next week but that's going to happen and that's version 2.2 for Daedalus mainnet but Flight is always released first. Why? Because it is the beta test.
Google does this with Chrome. They have something called Google Chrome canary versus regular everyday Google Chrome and canary always gets features before regular everyday Chrome and canary users get to enjoy those features first accepting that maybe just maybe there's a bug and maybe just maybe something crashes or something like that. That's the risk you take for bleeding-edge software. If you go to any Linux distribution you'll see distributions that are considered to be older and more stable or frankly any open source software project. You tend to have the official release and then you tend to have a bleeding-edge release that follows the software that gets features first.
We're no different in that respect so there's a whole brigade of people right now running around saying because we're making some videos and writing some ECIPs (= Ethereum Classic Improvement Proposals) with a completely unrelated team in the Ethereum classic world we're now missing Cardano deadlines when last week we did ship a Daedalus update. We shipped an update to Flight which I'm using and many other people are using for their ada on the mainnet. Not a testnet release (but) mainnet release! I just don't get it and maybe these people are just so stupid they can't understand the difference between these two clients and why these two clients exist. If that's the case okay but we really don't appreciate these types of comments floating around. There are two clients. Flight and mainnet...
We will always release Flight first. That's our go-to for a release especially if it's a big release and there's a lot of stuff to do. We tend to beta test it with the Flight users before we release it on mainnet because that's much more stable, that's much more test embedded software but if you're using mainnet that means you don't get (certain) features. First you have to wait some weeks to get those features. That's the difference between the two and no deadlines have been missed. I don't understand why people are saying that . They're just pitiful opportunists okay.
It's just our industry as a whole. So much misinformation seems to flow around our industry. There's zero accountability to anybody who lies in this industry. There's zero accountability for people who spread rumors and base speculation and so forth. You just get used to it. I've been around for eight years, it's just my life as it is and every single day I wake up go through my feed and I just see so much so many lies, bullshit, so many trolls, so much FUD and so forth... So, think for yourself people! If it doesn't make sense, if the fact pattern doesn't connect it's probably a lie. I don't understand why people when we release something say we don't release something when people are right now today using that update reporting over twitter how great the update is that the update doesn't exist.
It's our industry as a whole, it is what it is but that's the difference. In case you didn't know. If you go to https://daedaluswallet.io/ and you see this Flight thing and this mainnet thing. What the hell is the difference between the two? Flight is like our version of Chrome canary. It's a bleeding-edge version of Daedalus where the latest greatest things are released there as we get them. They're still of course tested, there's still some level of quality because we use it ourselves but it is not considered to be the mainnet client. The mainnet client which is the safe , reliable one which has been fully vetted and tested is Daedalus mainnet. Both of them run on mainnet, both of them you can use to access your ada. One is for more of a power user a bleeding-edge audience. The other is for grandma and for everyday users who don't want to fiddle with newer software which may have bugs in it or may have some unexpected results in it and so forth. So, they have different user bases and they have different purposes. One helps us beta test things as they come through and the feedback that we get there helps make the mainnet client better and occasionally allows us to fix things on the main net client. So, that there's a consistent user experience throughout the majority of people who use Cardano but they're both considered to be Daedalus wallets and we will always release one faster than the other one.
No external project, nothing I do... I'd go fishing or not fishing... I can work 24/7 or not work 24/7... This reality is not going to change (nor it) has any impact on that okay. So just wanted to mention that to everybody real quickly and get that out of the way. I can't believe sometimes the things that I read but then again this is our industry it's just stupid and there's a lot of stupid people in this industry and as I said in prior videos when I see it I kick them in the teeth.
I just don't have patience or tolerance for them. This is other people's money, this is a big ecosystem and every time people lie, spread FUD, disinformation with the express con-desire to cause chaos or value depreciation those people are bad people and every time we encounter them we will kick them in the teeth and call them for what they are, bad people, the paint-chip brigade.
Thanks so much for listening. Talk to you soon.
Video: https://www.youtube.com/watch?v=jmxuVU-oXKM
Paint chip: https://www.urbandictionary.com/define.php?term=Paint%20chips
Google canary approach (Daedalus Flight): https://iohk.io/en/blog/posts/2020/04/01/we-need-you-for-the-daedalus-flight-testing-program/
submitted by stake_pool to cardano [link] [comments]
