Linux - Virtualization and Cloud
This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OSes should be asked in the General forum.
We created a "free" t2.micro instance in order to test remote backups. I fully understand there may be limitations due to the "free" and "small" nature of the instance, but where can I find some details regarding the expected network bandwidth?
In our tests so far, we are only getting about 7 MByte/sec of throughput, which is well below what we expected.
I've poked around on the AWS website, but I can't seem to locate any documentation on network bandwidth as it relates to the "price" and "size" of the instance.
Any pointers would be appreciated.
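One quick sanity check worth doing (illustrative Python, not anything AWS-specific): network speeds are usually quoted in megabits per second, while transfer tools report megabytes per second, so it's easy to compare the wrong units:

```python
# Instance bandwidth is typically advertised in Mbit/s, while scp/rsync
# report MByte/s. Convert before comparing against any quoted figure.
def mbyte_to_mbit(mbyte_per_s):
    return mbyte_per_s * 8

print(mbyte_to_mbit(7))  # 56 -> the observed 7 MByte/s is about 56 Mbit/s
```

So 7 MByte/sec is roughly 56 Mbit/s on the wire, which helps when reading any bandwidth figures Amazon does publish.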
It would probably be more reliable to ask Amazon directly - I tested an EC2 instance there once, and their support was always very quick and precise (maybe things have changed since).
I say this because multiple factors might be involved - maybe the geographic region your instance runs in has problems or limitations, maybe the way you're running the test hits some other limit (CPU/disk/RAM), etc... or maybe your expectations are just too high.
Yes, we did ask Amazon support, but never got a response. We fully understand that we are at the mercy of any networking equipment between the source and target sites and can't do anything about that. As for expectations, our goal is to find out what price tier would meet our performance needs. 7 MB/sec is just too slow, but we don't need full gigabit connectivity either. We want to be able to take backups over several hours, so something in the neighborhood of 100-200 MB/sec is closer to what we need.
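To put a number on "backups over several hours", here is the rough arithmetic we used (the 2 TB dataset size and 4-hour window are just example figures, not our actual numbers):

```python
# Sustained bandwidth needed to push a backup through a fixed window.
# dataset_gb and window_hours below are assumed example values.
def required_mb_per_sec(dataset_gb, window_hours):
    # 1 GB = 1024 MB; convert the window from hours to seconds
    return dataset_gb * 1024 / (window_hours * 3600)

# e.g. a 2 TB backup over a 4-hour window:
print(round(required_mb_per_sec(2048, 4), 1))  # ~145.6 MB/s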
Ok, well it's weird that you never got any response.
In any case, if I remember correctly, when testing the bandwidth of Amazon EC2 against "ovh.de" (I used to have a VM there), I got about 25 MB/s.
That was fine for me, so I never investigated whether the 25 MB/s was limited by Amazon or by ovh.de.
I can't seem to locate any documentation on network bandwidth as it relates to the "price" and "size" of the instance.
Any pointers would be appreciated.
As part of AWS’s Free Tier, new AWS customers can get started with Amazon EC2 for free. Upon sign-up, new AWS customers receive the following EC2 services each month for one year:
750 hours of EC2 running Linux, RHEL, or SLES t2.micro instance usage
750 hours of EC2 running Microsoft Windows Server t2.micro instance usage
750 hours of Elastic Load Balancing plus 15 GB data processing
30 GB of Amazon Elastic Block Storage in any combination of General Purpose (SSD) or Magnetic, plus 2 million I/Os (with Magnetic) and 1 GB of snapshot storage
15 GB of bandwidth out aggregated across all AWS services
1 GB of Regional Data Transfer
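Note that the 15 GB outbound cap matters for a backup workload: even at the 7 MB/s you're seeing, the free tier's monthly transfer-out allowance is gone in well under an hour (a quick sketch, using the numbers from the list above):

```python
# How long the free tier's 15 GB of outbound transfer lasts at a
# sustained rate. cap_gb comes from the free-tier list; 7 MB/s is
# the rate reported earlier in the thread.
def minutes_until_cap(cap_gb=15, rate_mb_s=7.0):
    return cap_gb * 1024 / rate_mb_s / 60

print(round(minutes_until_cap(), 1))  # roughly 36.6 minutes at 7 MB/s
```

Past that point you're paying standard data-transfer rates, so the free tier isn't a realistic way to benchmark a backup pipeline.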
t2.micro not really for anything but testing and casual use
Even if you got good network performance out of one, it's temporary. t2.micros are allotted "credits" for CPU usage, and I guarantee they get lowest priority on the network. They're really just for testing.
Google "amazon ec2 instance types" for more info on t2 instances.
If you can still launch an m1.small, that's about as low as you want to go, but they're being phased out. Other than that, m3.medium, and those are about $100/mo; additional storage extra. You could use s3, but if you already have an instance, you may as well use EBS.
One of the basic issues is that I don't fully understand all the different services and how they interact with each other and with my hosts at the data center.
I've tried several times to get hold of someone at Amazon, but they are not very responsive.
Ideally, I would just prefer to be able to access a cloud filesystem or cloud iSCSI storage directly from my hosts, as that would seem to be the simplest approach.
Using iSCSI would also allow me to perform block-level incremental dumps from the database, meaning I would only update changed blocks when re-syncing the primary to the cloud. That should minimize the amount of traffic generated. Using a filesystem, I can still perform incremental dumps, but would need a periodic full dump. The incremental dumps to a filesystem are also additive (requiring additional storage on top of the full dump), whereas iSCSI would let me "replace" the changed blocks in place, keeping the iSCSI LUNs consistent.
Finally got hold of the Sales team at Amazon. They're forwarding some details on how AWS works and what my storage options may be.
I'm not sure that iSCSI will work, though; it sounds like the interface may be a bit different from what I'm used to from a SAN storage background.
Anyhow, if anyone knows of any iSCSI cloud providers, that would be good to get as a fall-back option.
@usao, Amazon doesn't really fit in this case. It's fine for integrating certain kinds of resources into your local environment, but it doesn't give any way to directly interface with arbitrary storage other than S3 (possibly Glacier). And S3 has voodoo involved with regard to its pricing; better to find someone who'll give you plain storage with an iSCSI interface to connect to. Costs will be more deterministic as well, and a fixed price is usually better than "should be cheaper". You might do better looking at storage services rather than specifically cloud services.