You can export an AWS AMI as a VMDK file using the AWS VM Import/Export service. This process involves exporting the AMI to an Amazon S3 bucket, from where you can then download the VMDK file.
Here's a detailed breakdown of the steps and important considerations:
1. Prerequisites and Limitations:
Before you start, be aware of these crucial points:
- AWS CLI: You'll primarily use the AWS Command Line Interface (CLI) for this process, as it's the most common and robust way to manage VM Import/Export tasks. Ensure you have it installed and configured with appropriate AWS credentials.
- S3 Bucket: You need an Amazon S3 bucket in the same AWS Region as your AMI to store the exported VMDK file.
- IAM Role (vmimport): AWS VM Import/Export requires an IAM service role (typically named vmimport) with the necessary permissions to read from and write to your S3 bucket. If you don't have this, you'll need to create it. AWS documentation provides the required trust policy and permissions.
- AMI Export Limitations:
- Not all AMIs can be exported. You generally cannot export AMIs that contain third-party software provided by AWS (e.g., Windows or SQL Server images, or any AMI from the AWS Marketplace), unless they were originally imported as "Bring Your Own License" (BYOL) through VM Import/Export or AWS Application Migration Service (MGN)/AWS Elastic Disaster Recovery (AWS DRS).
- You cannot export an AMI with encrypted EBS snapshots.
- VMs with volumes larger than 1 TiB may have limitations; for larger disks you might need to use a manifest file.
- You can't export an AMI if you've shared it from another AWS account.
- You can't have multiple export image tasks in progress for the same AMI simultaneously.
- Volumes attached after instance launch are not exported; only those specified in the AMI's block device mapping are included.
- Consistent AMI: For the best results, ensure your AMI was created from a stopped instance or an application-consistent snapshot to avoid data corruption in the exported VMDK.
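As a quick sanity check before starting, some of these prerequisites can be verified from the CLI. A sketch, assuming a placeholder AMI ID (ami-0123456789abcdef0) and region (us-east-1); substitute your own:

```bash
# Does the vmimport role already exist? (errors if not)
aws iam get-role --role-name vmimport --query 'Role.Arn' --output text

# Are any of the AMI's snapshots encrypted? (export fails if any are "True")
aws ec2 describe-images --image-ids ami-0123456789abcdef0 --region us-east-1 \
  --query 'Images[0].BlockDeviceMappings[].Ebs.Encrypted' --output text
```

Both commands require configured AWS credentials, so treat them as a template rather than a runnable script.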
2. Steps to Export the AMI as VMDK:
a. Create an S3 Bucket (if you don't have one):
You can do this via the AWS Management Console or AWS CLI.
AWS Console:
- Go to the S3 service.
- Click "Create bucket."
- Provide a unique bucket name.
- Choose the same AWS Region as your AMI.
- Keep "Block all public access" enabled unless you have a specific reason to disable it (and understand the security implications).
- Click "Create bucket."
AWS CLI:
```bash
aws s3 mb s3://your-export-bucket-name --region your-aws-region
```
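A mismatched Region is a common cause of export failures, so it can be worth confirming where the bucket actually landed:

```bash
# Returns the bucket's Region; a null/None LocationConstraint means us-east-1
aws s3api get-bucket-location --bucket your-export-bucket-name
```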
b. Create the IAM Role (if you don't have one):
This is crucial for allowing AWS VM Import/Export to interact with your S3 bucket.
1. Create a trust policy file (trust-policy.json):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "vmimport" }
      }
    }
  ]
}
```
2. Create the IAM role (vmimport):

```bash
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
```
3. Create an inline policy file (role-policy.json):

Replace your-export-bucket-name with your actual bucket name.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::your-export-bucket-name",
        "arn:aws:s3:::your-export-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:CreateBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": [
        "arn:aws:s3:::your-export-bucket-name",
        "arn:aws:s3:::your-export-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeTags", "ec2:CreateTags"],
      "Resource": "*"
    }
  ]
}
```
4. Attach the inline policy to the vmimport role:

```bash
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
```
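Once the role is set up, a quick check that both pieces are in place:

```bash
# The trust policy should name vmie.amazonaws.com as the principal
aws iam get-role --role-name vmimport --query 'Role.AssumeRolePolicyDocument'

# The inline policy list should include "vmimport"
aws iam list-role-policies --role-name vmimport
```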
c. Start the Export Image Task:
Use the export-image command with the AWS CLI.
```bash
aws ec2 export-image \
  --image-id ami-0123456789abcdef0 \
  --disk-image-format VMDK \
  --s3-export-location S3Bucket=your-export-bucket-name,S3Prefix=exports/ \
  --description "Exported VMDK from MyWebApp AMI"
```
- Replace ami-0123456789abcdef0 with the actual ID of your AMI.
- Replace your-export-bucket-name with the name of your S3 bucket.
- S3Prefix=exports/ is optional but good practice to organize your exported files within the bucket.
- --disk-image-format VMDK specifies the output format. You can also choose VHD or RAW.
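The command returns an ExportImageTaskId, which you'll need to check on progress later. A sketch of capturing it in a variable (same placeholder IDs as above):

```bash
# Capture the export task ID for later status checks
TASK_ID=$(aws ec2 export-image \
  --image-id ami-0123456789abcdef0 \
  --disk-image-format VMDK \
  --s3-export-location S3Bucket=your-export-bucket-name,S3Prefix=exports/ \
  --query 'ExportImageTaskId' --output text)
echo "$TASK_ID"
```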
d. Monitor the Export Task:
The export process can take some time, depending on the size of your AMI. You can check the status using the describe-export-image-tasks command:
```bash
aws ec2 describe-export-image-tasks
```

The Status field will show the progress (e.g., active, completed, deleting, deleted). Wait until the status changes to completed.
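If you'd rather not re-run the command by hand, a simple polling loop works. This assumes TASK_ID holds the ExportImageTaskId returned by export-image (e.g. export-ami-0123456789abcdef0):

```bash
# Poll the export task once a minute until it finishes
while true; do
  STATUS=$(aws ec2 describe-export-image-tasks \
    --export-image-task-ids "$TASK_ID" \
    --query 'ExportImageTasks[0].Status' --output text)
  echo "status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  sleep 60
done
```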
3. Download the VMDK:
For this, I tried s3cmd, which didn't work, based on this discussion. Just installing the AWS CLI with the method outlined here, then running

```bash
aws configure
```

and entering the access key and secret key (which PB had to generate and give me), followed by

```bash
aws s3 cp s3://our-vm-export-2025/exports/export-ami-our-vmdk-file.vmdk localfile.vmdk
```

worked.
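The exported object key follows a predictable pattern: AWS writes it as the S3Prefix followed by the export task ID and the disk format extension. A small hypothetical helper to build the download URI from those pieces:

```bash
# Hypothetical helper: builds the S3 URI of the exported disk image
# from bucket, S3Prefix, and the export task ID (AWS uses the task ID
# as the object name, e.g. export-ami-0123456789abcdef0.vmdk).
export_object_uri() {
  printf 's3://%s/%s%s.vmdk\n' "$1" "$2" "$3"
}

export_object_uri our-vm-export-2025 exports/ export-ami-0123456789abcdef0
# s3://our-vm-export-2025/exports/export-ami-0123456789abcdef0.vmdk
```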
4. Running the VMDK and verifying it:
It turned out that just creating a new VirtualBox VM with the VMDK file as the root disk (under "Advanced") would make it boot, but the network interface provided by VirtualBox is different from the one provided by AWS EC2, so I had to log in at the console. Since there was no password set (or I couldn't remember it), I followed this chroot procedure to set a password for the root user. The easiest way to mount the VMDK file was to go to another VirtualBox VM and, in that VM, add this VMDK as an additional disk on a higher SCSI port number, under Settings - Storage for that VM.
(Converting with qemu-img convert -f vmdk -O raw aws-image.vmdk aws-image.raw and mounting via a loop device was very time-consuming, since our hard disk was slow. Hence, avoid this method.)
Then we could log in and, as prompted by ChatGPT, modify the cloud-init-generated netplan file - copy-pasting below:

```bash
ip link   # to check for interfaces without having ifconfig installed
```

Update /etc/netplan/ (for Ubuntu 18.04+, or /etc/network/interfaces for older systems).
Example for Netplan (Ubuntu):

```yaml
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
```
In our case, there was an extra set-name: ens5 line which needed to be commented out. Then:

```bash
sudo netplan apply
ip a   # check if the interface is up; if it is down:
sudo ip link set enp0s3 up
```

and now

```bash
ip a
```

should show that the interface has an IP address via DHCP.
In this case, we tested the VM with NAT networking, with port forwarding set under VirtualBox Settings - Network: host port 2244 for ssh, 8080 for 80, and 8443 for 443.
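The same forwarding rules can also be set from the command line with VBoxManage; a sketch, assuming the VM is named aws-export-test:

```bash
# Rule format: name,protocol,host-ip,host-port,guest-ip,guest-port
# (empty host/guest IP fields mean "any" / the NAT default)
VBoxManage modifyvm "aws-export-test" --natpf1 "ssh,tcp,,2244,,22"
VBoxManage modifyvm "aws-export-test" --natpf1 "http,tcp,,8080,,80"
VBoxManage modifyvm "aws-export-test" --natpf1 "https,tcp,,8443,,443"
```

The VM must be powered off when modifyvm is run.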
Then, after setting /etc/hosts with the IP address of the host (192.168.1.6 in this case) for the various domains we wished to test, the dotnet app worked fine. WordPress refused to connect: apparently WordPress rejects requests that don't match its configured site URL, and we needed to change its config to make it accept such requests on port 8443, editing wp-config.php with the lines

```php
define('WP_HOME', 'https://our.domain.org:8443');
define('WP_SITEURL', 'https://our.domain.org:8443');
```