This is a series of articles explaining the build and deployment process on AWS cloud services. AWS is a leading cloud services platform.

Our objective is to create a scalable build in the cloud, with on-demand provisioning of memory, storage, and computation resources. At the end we benchmark and performance-test the results and show the cost of the instance. This will be a single-host testnet on a 16 GB cloud instance on AWS using dawn 4.2.

1. Steps for acquiring an AWS membership

Instructions for creating an account can be found on the AWS main page.

Once we create an AWS cloud account, we can view our EC2 dashboard, from which we can launch our desired instance. This instance is a fully functional computer hosted inside the AWS cloud.

For the sake of our example we created an m5.xlarge Linux instance with 4 vCPUs, 80 GB of storage, and 16 GB of RAM. It is priced at $0.192/hour, and we will utilize it for 20 hours of testing, which will cost us approximately $4. The full pricing table can also be viewed on the AWS EC2 pricing page.
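The ~$4 figure above can be verified with a quick calculation (the $0.192/hour rate is the on-demand price quoted above; actual rates vary by region):

```python
# Estimate the on-demand compute cost for the test run described above.
HOURLY_RATE_USD = 0.192   # m5.xlarge on-demand price quoted above
HOURS_USED = 20           # planned testing time

total = HOURLY_RATE_USD * HOURS_USED
print(f"Estimated compute cost: ${total:.2f}")  # → Estimated compute cost: $3.84
```

At $3.84, the quoted "approximately $4" holds for compute alone, before storage charges.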

2. Choosing a compatible Amazon Machine Image (AMI)

From the list of compatible operating systems we selected Ubuntu 16.04; other recommended operating systems are mentioned on the GitHub page.

The list available in the Amazon wizard allows you to select a suitable operating system and version with essential software pre-configured.
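The wizard does this lookup interactively, but the same AMI search can be sketched programmatically. The name pattern and Canonical's owner ID below follow Canonical's published AMI naming convention, but both are assumptions to verify against current AWS documentation:

```python
# Sketch: look up Ubuntu 16.04 (Xenial) AMIs instead of using the EC2
# launch wizard. Owner ID and name pattern are assumed; verify them.
lookup_params = {
    "Owners": ["099720109463"],  # Canonical (assumed owner ID)
    "Filters": [
        {"Name": "name",
         "Values": ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]},
        {"Name": "state", "Values": ["available"]},
    ],
}

# With boto3 installed and credentials configured, the call would be:
#   import boto3
#   images = boto3.client("ec2").describe_images(**lookup_params)["Images"]
#   latest = max(images, key=lambda i: i["CreationDate"])
print(lookup_params["Filters"][0]["Values"][0])
```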

3. Configuring the Instance

The EC2 wizard allows configuring settings such as the IP address and login credentials. Although 30 GB of storage is available within the free tier, we can acquire more storage if needed; for our specific task we configured an 80 GB drive. Once all settings are complete, we can see a summary of the computation resources we have configured.
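The wizard settings above (m5.xlarge, 80 GB root volume) map directly onto a launch request; a minimal sketch against the boto3 `run_instances` parameters, where the AMI ID is a placeholder and the root device name varies by AMI:

```python
# Sketch: the launch configuration chosen in the wizard, expressed as
# boto3 run_instances parameters. "ami-XXXXXXXX" is a placeholder, and
# the root device name should be checked against the chosen AMI.
launch_params = {
    "ImageId": "ami-XXXXXXXX",       # the Ubuntu 16.04 AMI chosen earlier
    "InstanceType": "m5.xlarge",     # 4 vCPUs, 16 GB RAM
    "MinCount": 1,
    "MaxCount": 1,
    "BlockDeviceMappings": [
        {"DeviceName": "/dev/sda1",
         "Ebs": {"VolumeSize": 80, "VolumeType": "gp2"}},  # the 80 GB drive
    ],
}

# With boto3 and credentials configured:
#   import boto3
#   boto3.client("ec2").run_instances(**launch_params)
print(launch_params["BlockDeviceMappings"][0]["Ebs"]["VolumeSize"])  # → 80
```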

4. Connect with RDP

Remote Desktop Protocol (RDP) permits connecting our local computer to the remote instance deployed on Amazon. There is a range of remote desktop tools that allow connecting your computer to a remote Amazon instance; we used RealVNC.
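A common pattern with VNC clients such as RealVNC is to tunnel the VNC port over SSH rather than expose it publicly. A sketch of building that tunnel command, where the host, user, and key path are placeholders and port 5901 assumes VNC display :1:

```python
# Sketch: build the SSH command that tunnels VNC (display :1, port 5901)
# from the local machine to the EC2 instance. Host and key path are
# placeholders for illustration only.
key_path = "~/.ssh/my-ec2-key.pem"
host = "ubuntu@<instance-public-dns>"  # placeholder public DNS name
vnc_port = 5901                        # VNC display :1 (assumed)

tunnel_cmd = f"ssh -i {key_path} -L {vnc_port}:localhost:{vnc_port} {host}"
print(tunnel_cmd)
```

With the tunnel open, the VNC client connects to `localhost:5901` instead of the instance's public address.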

5. Perform the build process

Once we are connected to the remote computer, we can repeat the same build steps we performed in our local environment.
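Those steps can be scripted so the remote build is reproducible. A sketch, where the repository URL is the EOS GitHub repository and the build script name follows the dawn-era instructions (verify both against the current README):

```python
# Sketch: the same build steps used locally, scripted for the remote
# instance. The build script name is an assumption from the dawn-era
# EOS instructions; check the repository's current documentation.
import subprocess

BUILD_STEPS = [
    ["git", "clone", "--recursive", "https://github.com/EOSIO/eos"],
    ["bash", "eos/eosio_build.sh"],
]

def run_build(dry_run=True):
    """Run each build step in order; with dry_run=True only print them."""
    for cmd in BUILD_STEPS:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

run_build()  # dry run: prints the commands without executing them
```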

6. Understanding the hourly billing and resource capacity of the AWS instance

We utilized the instance for 20 hours, costing us around $4 in total, including its storage and CPU provisioning capacity. This can be seen in the billing alerts dashboard of our account. There are also several guidelines for understanding CPU, memory, and pricing.
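A rough breakdown of that total, separating compute from storage. The $0.10/GB-month gp2 rate and the 730-hour billing month are assumptions to check against the current EBS pricing page:

```python
# Rough cost breakdown for the 20-hour run. The storage rate and
# hours-per-month convention are assumptions; verify against AWS pricing.
COMPUTE_RATE = 0.192      # $/hour, m5.xlarge on-demand (quoted earlier)
STORAGE_RATE = 0.10       # $/GB-month for gp2 (assumed)
HOURS = 20
VOLUME_GB = 80
HOURS_PER_MONTH = 730     # common AWS billing approximation

compute_cost = COMPUTE_RATE * HOURS
storage_cost = STORAGE_RATE * VOLUME_GB * (HOURS / HOURS_PER_MONTH)
print(f"compute ${compute_cost:.2f} + storage ${storage_cost:.2f} "
      f"= ${compute_cost + storage_cost:.2f}")
```

Under these assumptions storage adds only about $0.22, so the "around $4" total is dominated by compute time.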

7. Checking performance

In order to assure 1000 TPS, there is a plugin designed for this purpose and available on its GitHub page.
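Whatever tool generates the load, the throughput check itself reduces to transactions over elapsed time. A minimal sketch, with the measurement numbers below being purely hypothetical:

```python
# Sketch: verify the 1000 TPS target from a measured window of
# confirmed transactions. The sample numbers are hypothetical.
def tps(transactions: int, elapsed_seconds: float) -> float:
    """Transactions per second over the measurement window."""
    return transactions / elapsed_seconds

measured = tps(30_000, 30.0)  # hypothetical: 30,000 txns in 30 s
print(f"{measured:.0f} TPS, target met: {measured >= 1000}")
```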

The EOS community is also eager to know what happens to performance if we slightly modify the parameters related to single-threaded transactions, with or without signatures and the JIT compiler, and in single- versus multi-node setups.

These aspects will be highlighted in our subsequent posts.

8. Checking scalability and resizing the instance without losing data

This is the most promising feature of the AWS cloud: we can upgrade or downgrade our instance based on our CPU and RAM requirements. This is not possible in local builds, because in our local builds or Docker setup we are restricted to certain runtimes, memory, and CPU resources. The process of rescaling an AWS instance involves taking a snapshot of the instance, storing the state of the operating system and software, and re-attaching it to the revised memory and CPU infrastructure.
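For an EBS-backed instance, one common way to resize without losing data is the stop → modify → start cycle, since the root volume (and its data) survives the stop. A sketch written against the boto3 EC2 client interface, with the instance ID and target type as placeholders:

```python
# Sketch of an in-place resize for an EBS-backed instance using the
# boto3 EC2 client. The root volume, and hence the data, persists
# across the stop/start cycle.
def resize_instance(ec2, instance_id: str, new_type: str) -> None:
    """Stop the instance, change its type, and start it again."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": new_type})
    ec2.start_instances(InstanceIds=[instance_id])

# Real usage (requires boto3 and credentials); IDs are placeholders:
#   import boto3
#   resize_instance(boto3.client("ec2"), "i-0123456789abcdef0", "m5.2xlarge")
```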

Next steps

In the next post we will explain further details of (7) and (8), along with some recommendations on how the EOS community can benefit from the features provided by AWS.
