Webchain AWS Profitability Compared to Renting a Rig

Previously we discussed mining profitability on Amazon Web Services (AWS) with regard to CPU and GPU mining.  If you have not read those articles you may want to browse them, since this article builds on the CPU and GPU mining profitability results.  I know, it is like the first day of class and the teacher assigns homework.

Based on the interesting findings in the CPU and GPU testing I was curious about rig renting.  Rig renting is where you buy time on someone else's rig and reap the coins mined during that time.  Taking a quick survey of the rig rental landscape for Cryptonote v8, I found the cost to be roughly $0.40 USD per 1 kH/s per hour.  Many rig rentals are paid in bitcoin or other coins, and coin prices are currently volatile.  So, when I say a rough estimate I mean rough.

Cryptonote v8 was chosen as it is the most recent fork for some cryptonote coins and includes bulletproofs.  Bulletproofs lower transaction cost by decreasing block size.  Webchain already has a low transaction cost, so Cryptonote v8 rigs seemed the best comparison.  Webchain uses a special algorithm forked from CryptoNightV7 and altered to be CPU friendly, GPU difficult, and ASIC resistant.  Why not compare directly to an existing Webchain rig rental?  Because there are no Webchain rig rentals.

Okay, but why Webchain?  Quite simply because it is currently the most profitable CryptoNight coin to mine.  Mining Webchain at a 1 kH/s hash rate for 24 hours yields about 174.131 WEB, or roughly $0.58 USD.  A Webchain coin (WEB) is worth about $0.0033 USD.  Solo mining at 1 kH/s will yield a block find about every 6 hours and 20 minutes.  The reward for a block find is currently 47.652 WEB.  So, Webchain was chosen for profitability as well as mineability.  This should provide a good starting point for rig rental versus AWS mining.
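These figures are easy to sanity check.  Here is a quick sketch in Python using the numbers above (coin prices obviously drift, so treat the output as a snapshot):

```python
# Daily yield from mining Webchain at 1 kH/s, using the article's figures.
web_per_day = 174.131      # WEB mined per 24 h at 1 kH/s
usd_per_web = 0.0033       # WEB price in USD at time of writing

daily_usd = web_per_day * usd_per_web
print(f"1 kH/s for 24 h earns about ${daily_usd:.2f} USD")  # ~ $0.57
```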

The Experiment

Using previous results from CPU and GPU mining profitability on AWS as a starting point, determine current data for Webchain.


A base image for testing was created on Ubuntu 18.04.  The instructions for creating the base image can be found here.

The AWS instance used was a g2.8xlarge, which was determined to be the sweet spot for GPU mining based on hash rate to cost.  This provides 4 NVIDIA K520 GPUs at 300 H/s each.  The CPU (Intel Xeon E5-2650) hash rate for this instance type is also good at ~40 H/s per CPU.  AWS allows for the selection of threads per CPU, so this was adjusted to 20 of the possible 32 to fully utilize the 40MB of L3 cache.

For this round of experiments I used xmrig.  LOUD Mining created a set of xmrig builds for Webchain that includes CPU, NVIDIA GPU, and AMD GPU miners.  I used the Webchain CPU and NVIDIA GPU miners.  The mining was done on the Krunzy Webchain Pool with a static difficulty of 25000.  The configuration for Webchain is the same for Windows and Linux.  Here are the configs and running values:

CPU Miner Configuration

Webchain CPU Miner configuration

GPU Miner Configuration

Webchain NVIDIA GPU miner configuration

CPU Running Values

XMRIG CPU running values

GPU Running Values

XMRIG GPU running values


Testing was conducted through SSH on the g2.8xlarge instance at a cost of $0.78 USD/h.  Sessions for the CPU miner and the GPU miner were created to easily track hash rates.  The experiment ran for 90 minutes to allow the mining pool to provide accurate hash rates.


The data collected included hash rate as well as some power costs.  The hash rates remained fairly consistent over the experiment, varying within 20-40 H/s.

Hash Rates after 60 minutes
Miner Reported CPU x 20 = 800-840 H/s
Pool Reported CPU x 20 = 600 H/s

Miner Reported GPU x 4 = 1180 H/s
Pool Reported GPU x 4 = 900 H/s

Total Miner Hash = ~ 2000 H/s
Total Pool Hash = ~ 1500 H/s

Power Costs
In this experiment, 2 x Intel Xeon E5-2650 CPUs were used.  This is based on the number of threads, where one CPU has 16 threads.  The 20 threads selected out of the 32 available make the CPUs used equal to 2.  The wattage per CPU of 95W was taken from these specifications.

XMRIG reports GPU wattage of 80W per GPU as per Table 1.

Table 1. GPU K520 running wattage

CPU W = 95W/CPU x 2 CPU = 190W
GPU W = 80W/GPU x 4 GPU = 240W
Total W = 430 W
Estimated power cost at $0.10 USD/kWh = $0.0001 USD per Wh
Total power cost per hour = $0.0001 USD/Wh x 430 W = $0.043 USD/h

The total cost per hour is $0.78 for the g2.8xlarge instance, which presumably covers the power cost of $0.043.  Normalizing the pool-reported hash of 1.5 kH/s to 1 kH/s puts the cost at $0.52 USD per kH/s per hour.

As mentioned previously, mining Webchain at 1 kH/s hash rate for 24 hours yields about 174.131 WEB or roughly $0.58 USD.  A Webchain coin (WEB) is worth about $0.0033 USD.  Solo mining at 1 kH/s will yield a block find about every 6 hours and 20 minutes.  The reward for a block find is currently 47.652 WEB.

Renting a rig is about $0.40 USD / kH/s / h

Cost to produce a solo block of WEB at 1 kH/s yielding 47.652 WEB or $0.16 USD

AWS Mining = $0.58 USD * 6.33 h = $3.67 USD (net loss $3.51 USD)
Rig Rental Mining = $0.40 * 6.33 h = $2.53 USD (net loss $2.37 USD)
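Spelled out per solo block find, the comparison looks like this (a sketch using the figures above):

```python
# Cost vs. reward for one solo block find at 1 kH/s (about 6.33 hours).
hours_per_block = 6.33
block_reward_web = 47.652
usd_per_web = 0.0033

block_value = block_reward_web * usd_per_web      # ~$0.16
aws_cost = 0.58 * hours_per_block                 # AWS mining at 1 kH/s
rental_cost = 0.40 * hours_per_block              # rig rental at 1 kH/s

print(f"block value ${block_value:.2f}")
print(f"AWS: ${aws_cost:.2f}, net {block_value - aws_cost:+.2f}")
print(f"rental: ${rental_cost:.2f}, net {block_value - rental_cost:+.2f}")
```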


In a game of pennies a nickel is a lot.  Profitability is influenced by coin value and the cost to create a coin.  Webchain does not have much value currently, at a third of a penny, so producing WEB is costly.  In fact, the energy costs alone currently outweigh the value of the WEB mined.

But that’s not why the experiment was run.  We want to answer whether AWS mining is viable compared to rig rental.  Simply, AWS mining is over a dollar more per hour on average compared to rig rental.  Is a dollar difference a lot?  Absolutely when dealing with low margins.  Rig rental seems the way to go for now; however, no Webchain rig rentals were found at the time of this article.  That puts Webchain mining back to your own rig or setting something up in AWS.  If you believe in the future value of WEB then this cost may be acceptable, but a straight purchase would be cheapest at this time.

So, when does the break even point happen?  Let’s make a few assumptions based on the current data.  It takes 6.33 hours to produce 47.652 WEB on average.  Normalizing to WEB produced per hour we get 7.528 WEB/h.  Also, based on 1 kH/s, the power cost alone is $0.043 USD/h, AWS mining is $0.58 USD/h, and rig rental is $0.40 USD/h.  With assumptions in hand, let’s crunch some numbers to find the break even values.

Your own rig (power cost) = $0.043 USD/h / 7.528 WEB/h = $0.0057 USD/WEB
Rig rental = $0.40 USD/h / 7.528 WEB/h = $0.0531 USD/WEB

AWS mining = $0.58 USD/h / 7.528 WEB/h = $0.0770 USD/WEB
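The break even prices above follow directly from cost per hour divided by WEB per hour; a minimal sketch:

```python
# Break-even USD per WEB for each approach, at 1 kH/s.
web_per_hour = 47.652 / 6.33          # ~7.528 WEB/h

breakeven = {
    "own rig (power only)": 0.043 / web_per_hour,
    "rig rental":           0.40  / web_per_hour,
    "AWS mining":           0.58  / web_per_hour,
}
for name, usd in breakeven.items():
    print(f"{name}: ${usd:.4f} USD/WEB")
```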

Running your own rig and paying only power costs, based on the AWS setup, is 10x cheaper than renting a rig.  A little more than half a penny is break even for your own rig, and a little while back WEB had reached $0.007 USD.  But not everyone can afford to purchase or maintain a rig.

With no expertise but the will to mine, a rig can be rented.  To break even, the price of WEB would need to reach $0.0531 USD/WEB to make a rig rental viable.

You can still roll your own and create a miner in AWS.  To break even here, based on the current experiment, WEB would need to reach $0.077 USD.  However, if you read the CPU Mining Profitability article you know there is a better AWS rig.  Gold star for you!

Webchain is mineable by CPU and intentionally less suited to GPU mining.  We learned previously that we could spin up several cheap CPU instances in AWS.  There we see a break even profitability of $0.05 USD, also known as the shiny nickel.  It is also at this point that AWS edges out the rental by $0.0031 USD/h.  Remember, fractions of a cent are important in the game of pennies.

Whether new to mining or an old pro, attention to cost to mine is just as important as a huge hash rate.  Calculating the cost to produce a coin to its current value provides insight into when to purchase hash rate.  Bigger is not always better if the costs are too high.  You may actually be more profitable at a lower hash rate.

If you like this article, please think of tossing us some coins to help pay for fluffernutters.  Thank you.

AWS CPU Mining Cryptonote Profitability

We recently explored the profitability of GPU coin mining on Amazon Web Services (AWS).  But GPU mining is not the only way to get coins.  Hey, what about CPU mining?

Glad you asked!  There are some cryptocoins that are designed to be CPU mined.  The coin currently slipping in and out of most profitable is Webchain.  Webchain has developed its own cryptonight algorithm specifically targeted at CPU mining, called CryptoNight-Webchain.  The algorithm seeks to be not only CPU mineable but also to deter GPU and ASIC mining.  In other words, the focus is on delivering a coin mineable by just about any CPU on any device.

The Experiment

Determine if CPU mining in AWS could be profitable.  Find the best combination of low cost, CPU power, and coin return.


As mentioned earlier, the test mined Webchain coin.  Webchain was chosen because it is specifically CPU mined and is one of the most profitable CPU mined CryptoNight coins.

I used webchain-miner to mine Webchain coin.  This miner was selected because it is the miner Webchain created and suggests.  Also, the miner is easily available and comes in several OS versions.  The version used was the Linux amd64 miner.

The OS platform was Ubuntu 18.04.  The OS was setup as I explained here.  The webchain-miner application was downloaded from releases so no compiling was needed.

The webchain-miner was configured to use pool.webchain.network:3333 (variable difficulty) with no specific CPU configuration.  Testing was across several instance types with differing CPUs, so the CPU configuration was left at its default.

Finally, in order to auto run the webchain-miner when it loads the following was added to /etc/rc.local:

#!/bin/sh -e
cd /home/ubuntu/webcpu
# Launch the miner in the background so rc.local can exit.
# (The binary name is assumed from the webchain-miner release archive.)
./webchain-miner &


AWS has many instance types with viable CPU configurations.  To reduce the test base, the first cut was to exclude T type instances.

T type instances have the ability to burst beyond 100% CPU when under load.  This would not be controllable and has one other serious and expensive downside: T types incur $0.05 per vCPU-hour when bursting.  So, the cheap T3.Micro at less than a penny an hour just became a nickel, a >500% increase in cost when consistently at 100% CPU.  So, T types are out.

What we wanted was a CPU workhorse for a good price.  To further reduce the compute optimized instances, we looked at the L3 cache size of the CPU.  The L3 cache limits the number of CPUs that can be used for mining, since each mining thread needs about 2MB of scratchpad.  The equation can be simplified to L3 cache (MB) / 2 = number of CPUs that can be used.  So, 24MB of L3 cache with 24 CPUs would only be able to use 12 CPUs to mine.  This limited us to C type instances.
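The rule of thumb above is easy to encode.  A minimal sketch (the 2MB-per-thread figure comes from the CryptoNight scratchpad size):

```python
# Usable mining threads = min(vCPUs, L3 cache / 2 MB per thread).
def usable_threads(vcpus: int, l3_cache_mb: int) -> int:
    return min(vcpus, l3_cache_mb // 2)

print(usable_threads(72, 24))  # C5.18xlarge -> 12
print(usable_threads(16, 24))  # C5.4xlarge  -> 12
print(usable_threads(8, 24))   # C5.2xlarge  -> 8
```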

The C5 instance types have good CPU power with high network speed.  Also, since C5 is relatively new the spot pricing is as much as 70% off normal price.  The testing was performed on C5.18xlarge, C5.4xlarge, and C5.2xlarge.


The following is a snapshot of each C5 testing run using the webchain-miner.  Each snapshot shows the CPU info including L2/L3 cache and the number of threads webchain-miner used for mining.  Also, hash rate is shown on the last line as Hash/second.

C5.18xlarge:

* VERSIONS:     webchain-miner/ libuv/1.20.3-dev gcc/6.3.0
* CPU:          Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz (1) x64 AES-NI
* CPU L2/L3:    32.0 MB/24.8 MB
* THREADS:      12, cryptonight-webchain, av=0, donate=5%
* POOL #1:      pool.webchain.network:3333
* COMMANDS:     hashrate, pause, resume
[2018-10-07 23:15:55] READY (CPU) threads 12(12) huge pages 12/12 100% memory 24.0 MB
speed 2.5s/60s/15m 644.1 644.1 n/a H/s max: 644.1 H/s

C5.4xlarge:

* VERSIONS:     webchain-miner/ libuv/1.20.3-dev gcc/6.3.0
* CPU:          Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz (1) x64 AES-NI
* CPU L2/L3:    8.0 MB/24.8 MB
* THREADS:      12, cryptonight-webchain, av=1, donate=5%
* POOL #1:      pool.webchain.network:3333
* COMMANDS:     hashrate, pause, resume
[2018-10-08 11:19:48] READY (CPU) threads 12(12) huge pages 12/12 100% memory 24.0 MB
[2018-10-08 11:19:49] use pool pool.webchain.network:3333
speed 2.5s/60s/15m 486.4 486.6 n/a H/s max: 486.8 H/s

C5.2xlarge:

* VERSIONS:     webchain-miner/ libuv/1.20.3-dev gcc/6.3.0
* CPU:          Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz (1) x64 AES-NI
* CPU L2/L3:    4.0 MB/24.8 MB
* THREADS:      8, cryptonight-webchain, av=1, donate=5%
* POOL #1:      pool.webchain.network:3333
* COMMANDS:     hashrate, pause, resume
[2018-10-08 11:35:50] READY (CPU) threads 8(8) huge pages 8/8 100% memory 16.0 MB
speed 2.5s/60s/15m 321.8 321.9 n/a H/s max: 322.5 H/s

Table 1 shows CPU and hash rate as they relate to cost.  A higher Hash/Spot is desired, as it equates to higher hash with lower spend.

Table 1. AWS C5 instance type CPU to hash rate

The C5.18xlarge had the highest hash rate but did not use all of its processing power.  The C5.18xlarge has 72 CPUs but was only able to utilize 12 CPUs for mining.  The reason is the 24MB L3 cache, which allows only 12 CPUs to mine.

Similarly, the C5.4xlarge did not fully utilize all its CPUs but had the second highest hash rate.  Out of 16 CPUs the C5.4xlarge was able to mine with 12, based on a 24MB L3 cache.

The most fully utilized instance had the lowest hash rate: the C5.2xlarge, with all 8 CPUs utilized.  The C5.2xlarge is also the least costly at $0.13/hour.

CPU utilization to cost is well shown in Table 1.  The C5.2xlarge is not only the lowest cost but also has the most fully utilized CPU.  As far as hash to cost is concerned, the C5.2xlarge is the clear winner with a Hash/Spot of 2476 H/s per dollar-hour.
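The Hash/Spot figure is just the reported hash rate divided by the spot price per hour; a minimal sketch using the C5.2xlarge run above:

```python
# Hash/Spot metric: reported hash rate divided by spot price per hour.
def hash_per_spot(hashrate_hs: float, spot_usd_per_hour: float) -> float:
    return hashrate_hs / spot_usd_per_hour

# C5.2xlarge: 321.9 H/s (60 s average from the run above) at $0.13/h.
print(round(hash_per_spot(321.9, 0.13)))  # -> 2476
```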


The results were not unexpected, as higher CPU utilization at lower cost converged on the least expensive, most utilized C5 instance: the C5.2xlarge.  Another finding during testing is that the smaller C5.2xlarge has more spot availability.  We were able to get spot pricing as low as $0.08.  We also found we could spin up 50 C5.2xlarge in one spot request.  For $4 an hour you could spin up a hash rate of about 16 kH/s.  But is it profitable?

Currently, Webchain is trading for around $0.005 but has been as high as $0.007 in the past 2 weeks.  For now, we assume $0.005 is the coin price.  As mentioned above, 16 kH/s is attainable for $4 per hour.  In 24 hours we would spend $96 and mine 2,395 WEB worth $11.55.  A net loss of about $85.  Obviously not profitable.
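Putting the article's figures into a quick sketch makes the loss concrete:

```python
# 50 C5.2xlarge at ~$0.08/h spot, running for 24 hours.
instances = 50
spot_per_hour = 0.08
hours = 24

spend = instances * spot_per_hour * hours     # $96
mined_value = 11.55                           # value of the 2,395 WEB mined
net = mined_value - spend                     # about -$85

print(f"spend ${spend:.0f}, coins worth ${mined_value:.2f}, net ${net:.2f}")
```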

So, when would WEB become profitable to mine via AWS C5?  Assuming AWS cost does not change, the value of WEB would need to increase about 10x.  Also, the difficulty should not be remarkably different than when the testing was run.  With all those caveats, when WEB reaches one shiny nickel ($0.05) the break even point is reached.

Could WEB become profitable to mine via AWS and CPU?  It certainly seems there could be a future convergence of the WEB price and the cost to mine it.

Hopefully you found this useful.  Throw us a few coins to say thanks.

AWS GPU Mining Cryptonote Profitability

Anyone who has any idea about Amazon Web Services (AWS) pricing already knows the answer to mining profitability on AWS.  Simply, it is not profitable at this time.  But that is not why we are here.  We are here to review an experiment I recently performed at AWS.

The Experiment

Find the sweet spot for running GPU mining tests in AWS.  The sweet spot combines GPU power with the lowest cost.  A classic cloud computing formula.


I used xmr-stak to mine Leviar coin (XLC).  I chose xmr-stak mostly out of familiarity, but also because xmr-stak tries to optimize itself on startup when no configs are present, something I will require in just a bit.  Leviar was chosen because it is a coin with low difficulty on the Cryptonote v7 algorithm.  I wanted enough hash rate to easily see differences; a heavier algorithm or greater difficulty could reduce the hash rate by as much as a factor of 10.  Plus, I wanted some Leviar, and they had a nice pool for viewing hash rates.

We will need an AWS machine image (AMI) with xmr-stak miner.  When I checked the AWS Marketplace I found no such thing [DUH].  I was going to do it myself anyway.

Here is the base I used: How to Setup Ubuntu 18.04 for Coin Mining.  I installed xmr-stak using the instructions on the xmr-stak Git.  The xmr-stak was configured for the Leviar pool.  Important: remove any GPU and CPU configs created by xmr-stak.  We will need xmr-stak to create new configs based on the EC2 instances we run the AMI on.

The last piece was to make an AMI of the fully functional Ubuntu xmr-stak miner EC2 instance.  The AMI will be used to launch EC2 instances that we can test.


I chose to test p2, p3, g2, and g3 instance types.  These are the only instance types that come with a GPU on launch.  Also, all of these types use NVIDIA GPUs and require CUDA for xmr-stak.  These can be fairly expensive, so the goal was to run for less than an hour.  To keep costs low the instances were launched with spot pricing.  All costs will reference the spot prices.

To test, a GPU instance was launched and xmr-stak started via SSH.  Data was collected at 60 seconds.  The pool settings were the same for all tests.  Unfortunately I did not use a static difficulty but by 60 seconds the difficulties were about the same.

Here is an example xmr-stak config used for g2.8xlarge testing:

"cpu_threads_conf" :
[
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 0 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 1 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 2 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 3 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 4 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 5 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 6 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 7 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 8 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 9 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 16 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 17 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 18 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 19 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 20 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 21 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 22 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 23 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 24 },
    { "low_power_mode" : false, "no_prefetch" : true, "affine_to_cpu" : 25 },
],


"gpu_threads_conf" :
[
  // gpu: GRID K520 architecture: 30
  //      memory: 3985/4037 MiB
  //      smx: 8
  { "index" : 0,
    "threads" : 42, "blocks" : 24,
    "bfactor" : 0, "bsleep" :  0,
    "affine_to_cpu" : false, "sync_mode" : 3,
  },
  // gpu: GRID K520 architecture: 30
  //      memory: 3985/4037 MiB
  //      smx: 8
  { "index" : 1,
    "threads" : 42, "blocks" : 24,
    "bfactor" : 0, "bsleep" :  0,
    "affine_to_cpu" : false, "sync_mode" : 3,
  },
  // gpu: GRID K520 architecture: 30
  //      memory: 3985/4037 MiB
  //      smx: 8
  { "index" : 2,
    "threads" : 42, "blocks" : 24,
    "bfactor" : 0, "bsleep" :  0,
    "affine_to_cpu" : false, "sync_mode" : 3,
  },
  // gpu: GRID K520 architecture: 30
  //      memory: 3985/4037 MiB
  //      smx: 8
  { "index" : 3,
    "threads" : 42, "blocks" : 24,
    "bfactor" : 0, "bsleep" :  0,
    "affine_to_cpu" : false, "sync_mode" : 3,
  },
],



The results of the tests were about what could be predicted: more GPUs meant more hash rate.  However, between the 2 series (p2, g2) instances and the 3 series (p3, g3) instances the GPU power was different.  The 2 series use NVIDIA K520 GPUs whereas the 3 series use NVIDIA Tesla V100 GPUs.  In general the hash rate of the 2 series was 300 H/s per GPU and the 3 series was 412 H/s per GPU, about a 27% difference in GPU power.

CPU hash rates were counted as part of the total hash rate, since determining profitability should include all computing power.  The g2, p2, and p3 instance types were within 10% of each other in CPU power.  Surprisingly, the g3 instances had low CPU hash rates, with over 60% less computing power.

Table 1:  Hash rates and costs per hash rate across instance types

Factoring compute power together with instance cost we get Table 1.  The heat map shows favorable areas in green, decreasing to red.  The highest hash rate also belongs to the highest cost instance.  The p2.8xlarge instance type had the highest GPU hash rate at 2400 H/s and tied for the highest CPU hash rate at 1440 H/s.  It therefore had the highest total hash rate at 3840 H/s, with a whopping cost of $2.46/hour.

The lowest hash rate belonged to the smallest instance, the p2.xlarge.  It had a GPU hash rate of 300 H/s and a CPU hash rate of 180 H/s, although it was not the cheapest instance at $0.27.

Knowing the highest and lowest performing instance types, we can now look at hashing power per dollar spent per hour.  This determines our sweet spot of best spent dollars for the hash rate; larger numbers are better.  Our highest performer clocks in at 1560 H/s per $/h.  Basically, the high cost roughly halves the effective hash rate per dollar.  The lowest performer ran at 1778 H/s per $/h.  In these cases the highest and lowest performers by raw hash rate both scored lowest in hash compared to cost per hour.

Time now for the sweet spot.  In Table 1, the best hash rate per cost per hour goes to the g2.8xlarge instance type.  The 4 GPUs and large number of CPUs at a sub dollar cost clinched the sweet spot title.  At 3385 H/s per $/h it is the best spent money for hash rate.
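The hash-per-dollar figures above fall out of a simple division.  A sketch using the Table 1 values (the g2.8xlarge total of ~2640 H/s is implied by its 4 GPUs at 300 H/s plus the CPU contribution):

```python
# Hash rate per dollar-hour for three of the tested instance types.
def hash_per_dollar_hour(total_hs: float, usd_per_hour: float) -> float:
    return total_hs / usd_per_hour

print(round(hash_per_dollar_hour(3840, 2.46)))  # p2.8xlarge -> 1561
print(round(hash_per_dollar_hour(480, 0.27)))   # p2.xlarge  -> 1778
print(round(hash_per_dollar_hour(2640, 0.78)))  # g2.8xlarge -> 3385
```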

Or is it?  The g2.2xlarge costs less than a third of the g2.8xlarge.  This means 3 g2.2xlarge could run for less than one g2.8xlarge.  Running 3 such instances would triple the hash rate, but only to 1960 H/s.  That is still well under the g2.8xlarge, placing it closer to the g3.8xlarge in cost performance.  The g2.8xlarge holds the sweet spot title.

We found the sweet spot, so how much can we make?  The g2.8xlarge instance type can produce 8.72 XLC per dollar-hour, which means if I normalize to one dollar per hour I get about 8.72 Leviar coins.  Leviar was about a penny a coin at the time of the test.  So, for every dollar spent running the instance we would get back 8-9 cents per hour.  Not a good return even with bad math skills.  In fact, to break even, Leviar (XLC) would need to be more like 10 cents per coin, which would cover most costs at 87 cents per dollar spent.  The issue is, when 10 cents a coin happens, what will the difficulty be and how will it affect hash rate?
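The return and break even math can be sketched from the 8.72 XLC per dollar-hour figure:

```python
# Return per dollar-hour on the g2.8xlarge sweet spot.
xlc_per_dollar_hour = 8.72
xlc_price = 0.01                     # ~a penny per XLC at test time

return_per_dollar = xlc_per_dollar_hour * xlc_price     # ~$0.09
breakeven_price = 1 / xlc_per_dollar_hour               # ~$0.11 per XLC

print(f"return per dollar spent: ${return_per_dollar:.2f}")
print(f"break-even XLC price: ${breakeven_price:.3f}")
```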


Currently mining Leviar coin in AWS is not profitable.  This is not to say a break even point could not be reached.  However, if you are looking to break even you may just want to purchase the coin.  If you are like me and think cryptocoins will see their day, then mining may be worthwhile.

Certainly for testing purposes AWS is cost effective.  I cannot afford a large rig, so I can test my ideas in AWS in under an hour for less than a dollar.

And now some interesting math.  At one dollar an hour you would spend $720 in 30 days and would mine 6,480 XLC.  I bet you could buy a pretty decent GPU for $720.  The point: it all depends on where you want to spend those dollars and whether you want coins to HODL.

Hopefully you found this useful.  Throw us a few coins to say thanks.