# Converting QEMU Windows 2000 images to AWS EC2 AMIs
## Why am I doing this?
I have qcow2 images of several Windows 2000 Server installations and am experimenting with different ways to run them. The walkthrough below uses an Ubuntu image as its example, but the steps are the same for any QCOW2 disk.
## Introduction
Moving workloads from on‑premises virtualization environments to Amazon Web Services (AWS) often starts with a disk image. QEMU’s native QCOW2 format is compact and widely used, but AWS expects an Amazon Machine Image (AMI) that it can launch as an EC2 instance. The conversion process is well‑defined, reproducible, and can be scripted for repeatability. Below is a practical, no‑fluff walkthrough that takes you from a local QCOW2 file to a registered AMI ready for production use.
## Prerequisites
| Item | Minimum Requirement |
|---|---|
| AWS Account | Permissions for `ec2:ImportImage`, `s3:PutObject`, `s3:GetObject`, `ec2:RegisterImage` |
| AWS CLI | v2.x installed and configured (aws configure) |
| QEMU tools | qemu-img (part of the qemu-utils package on Linux) |
| S3 bucket | Empty bucket in the same region where you’ll import the image |
| Linux host | Sufficient disk space for the intermediate RAW file (size ≈ original virtual disk) |
| Root / sudo | Only needed if you mount or otherwise manipulate block devices |
Tip: Using an EC2 instance in the target region as the conversion host eliminates cross‑region data transfer costs.
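Before starting, it is worth confirming the tooling is actually in place. A quick sanity check might look like this (package names assume a Debian/Ubuntu host):

```shell
# Verify the required tools are installed
qemu-img --version            # from the qemu-utils package (sudo apt install qemu-utils)
aws --version                 # AWS CLI v2.x

# Confirm the CLI has working credentials for the target account
aws sts get-caller-identity
```

If `get-caller-identity` fails, fix your credentials (`aws configure`) before going any further.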
## Overview of the Workflow
- Convert QCOW2 → RAW
- Upload the RAW file to S3
- Import the RAW file as an AMI (`aws ec2 import-image`)
- Monitor import progress
- Verify and tag the resulting AMI
Each step is described in detail below.
## Step 1 – Convert QCOW2 to RAW
```bash
# Replace these placeholders with your actual filenames
QCOW2_FILE=ubuntu-20.04-server.qcow2
RAW_FILE=ubuntu-20.04-server.raw

# Perform the conversion
qemu-img convert -p -f qcow2 -O raw "$QCOW2_FILE" "$RAW_FILE"
```

Explanation:

- `-p` shows a progress bar, useful for large disks.
- `-f qcow2` tells `qemu-img` the source format.
- `-O raw` selects the destination format required by AWS VM Import.
The resulting RAW file is a flat binary image the same size as the virtual disk, including any space that was sparse in the QCOW2. No compression is applied at this stage; the subsequent S3 upload will use multipart transfers automatically for large files, and you can enable server-side encryption on the object if required.
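Before converting, it helps to check how big the RAW file will be: `qemu-img info` reports the virtual size, which is the size the flat image will occupy on disk. For example:

```shell
# Inspect the source image; "virtual size" is what the RAW output will occupy
qemu-img info "$QCOW2_FILE"

# Confirm the conversion host has at least that much free space
df -h .
```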
## Step 2 – Upload the RAW Image to S3
```bash
# Variables – adjust to your environment
S3_BUCKET=my-import-bucket
S3_KEY=imports/ubuntu-20.04-server.raw
REGION=us-east-1   # Must match the region you'll import into

# Upload (multipart is automatic for files > 8 MiB)
aws s3 cp "$RAW_FILE" "s3://$S3_BUCKET/$S3_KEY" --region "$REGION"
```

Considerations:
- Encryption: Add `--sse AES256` if you want server-side encryption.
- Permissions: The IAM role/user used by the CLI must have `s3:PutObject` on the bucket.
- Cost: S3 storage charges apply while the image sits in the bucket; you can delete the object after the import completes.
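After the upload finishes, a quick check confirms the object exists and that its size matches the local RAW file (variable names follow the snippet above):

```shell
# Local size in bytes (GNU stat, as on the Linux conversion host)
stat -c %s "$RAW_FILE"

# Remote size in bytes; the two numbers should be identical
aws s3api head-object \
  --bucket "$S3_BUCKET" \
  --key "$S3_KEY" \
  --query 'ContentLength' \
  --output text
```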
## Step 3 – Import the Image as an AMI
AWS provides two APIs for importing images: VM Import/Export (`import-image`) and the older `import-instance`. `import-image` is the recommended method because it supports raw disk images directly.
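One prerequisite the CLI will not create for you: VM Import/Export needs an IAM service role named `vmimport` that trusts `vmie.amazonaws.com` and can read the S3 bucket. If the role is missing, the `import-image` call fails before any data is copied. A minimal sketch (the role policy here is trimmed to the S3 permissions used in this walkthrough; see the AWS VM Import documentation for the full version):

```shell
# Trust policy allowing the VM Import service to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "vmie.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:Externalid": "vmimport" } }
  }]
}
EOF

aws iam create-role --role-name vmimport \
  --assume-role-policy-document file://trust-policy.json

# Inline policy granting read access to the import bucket
cat > role-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
    "Resource": ["arn:aws:s3:::$S3_BUCKET", "arn:aws:s3:::$S3_BUCKET/*"]
  }]
}
EOF

aws iam put-role-policy --role-name vmimport \
  --policy-name vmimport --policy-document file://role-policy.json
```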
```bash
IMPORT_TASK_ID=$(aws ec2 import-image \
  --description "Ubuntu 20.04 Server – imported from QCOW2" \
  --license-type BYOL \
  --disk-containers "Format=raw,UserBucket={S3Bucket=$S3_BUCKET,S3Key=$S3_KEY}" \
  --region "$REGION" \
  --output text \
  --query 'ImportTaskId')
```

Note: `--disk-containers` uses the CLI's shorthand syntax here; passing inline JSON works too, but process substitution inside a quoted `file://` argument does not.

Key parameters:
| Parameter | Meaning |
|---|---|
| `--license-type BYOL` | "Bring Your Own License"; the typical choice for most Linux distributions. |
| `Format` | Must be `raw` for the output of `qemu-img`. |
| `UserBucket` | Points to the S3 location of the uploaded RAW file. |
| `ImportTaskId` | Returned identifier used to poll status. |
Important: Do not use `--platform` for Linux images; AWS infers the OS from the image metadata. For Windows images you would specify `--platform Windows`.
## Step 4 – Monitor Import Progress
The import can take anywhere from a few minutes to several hours depending on image size and network throughput.
```bash
aws ec2 describe-import-image-tasks \
  --import-task-ids "$IMPORT_TASK_ID" \
  --region "$REGION" \
  --query 'ImportImageTasks[0].Status' \
  --output text
```

Typical status values:
- `pending` – Task queued.
- `active` – Import in progress.
- `completed` – AMI successfully created.
- `deleted` – Task cancelled or failed.
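While the task is `active`, the same API also reports a percentage and a human-readable phase message, which is handy for long imports:

```shell
# Show status, percent complete, and the current phase in one call
aws ec2 describe-import-image-tasks \
  --import-task-ids "$IMPORT_TASK_ID" \
  --region "$REGION" \
  --query 'ImportImageTasks[0].{Status:Status,Progress:Progress,Message:StatusMessage}' \
  --output table
```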
You can also retrieve the AMI ID once the task finishes:
```bash
aws ec2 describe-import-image-tasks \
  --import-task-ids "$IMPORT_TASK_ID" \
  --region "$REGION" \
  --query 'ImportImageTasks[0].ImageId' \
  --output text
```

## Step 5 – Verify and Tag the New AMI
```bash
AMI_ID=$(aws ec2 describe-import-image-tasks \
  --import-task-ids "$IMPORT_TASK_ID" \
  --region "$REGION" \
  --query 'ImportImageTasks[0].ImageId' \
  --output text)

# Optional – add descriptive tags
aws ec2 create-tags \
  --resources "$AMI_ID" \
  --tags Key=Name,Value="Ubuntu-20.04-QCOW2-Import" \
         Key=Source,Value="QCOW2-Conversion" \
  --region "$REGION"
```

Launch a test instance to confirm that the OS boots correctly, networking works, and any required drivers (e.g., ENA, NVMe) are present. If you encounter boot failures, verify that the source image includes the appropriate cloud-init configuration or that the kernel supports the AWS hypervisor.
## Clean‑Up
After a successful import you can free up storage:
```bash
aws s3 rm "s3://$S3_BUCKET/$S3_KEY"
```

If you performed the conversion on a temporary EC2 instance, terminate it to avoid lingering charges.
## Common Pitfalls & How to Avoid Them
| Symptom | Likely Cause | Remedy |
|---|---|---|
| Import stalls at “active” for > 24 h | Insufficient IAM permissions on the S3 bucket (missing s3:GetObject). | Grant s3:GetObject and retry. |
| EC2 instance fails to boot (kernel panic) | Kernel lacks the ENA or NVMe drivers that Nitro‑based instances require (older instance types need Xen drivers instead). | Rebuild the source VM with a newer kernel or install the `linux-aws` package before exporting. |
| Incorrect root device mapping | Image was created with a non‑standard partition layout. | Ensure the root filesystem is the first partition, or use the --boot-mode uefi flag during import for UEFI‑based images. |
| Large upload times | RAW image is fully expanded, so even empty space gets uploaded. | Convert to a compact format that VM Import also accepts, such as VHD (`qemu-img convert -O vpc`), and set the disk-container format to `vhd`. |
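To shrink the upload, one option is to skip RAW entirely: VM Import also accepts VHD and VMDK, and `qemu-img` can produce a VHD directly (its name for the format is `vpc`), which preserves sparseness. A sketch:

```shell
# Convert straight to VHD instead of RAW; empty space stays compact
QCOW2_FILE=disk.qcow2
VHD_FILE="${QCOW2_FILE%.qcow2}.vhd"
qemu-img convert -p -f qcow2 -O vpc "$QCOW2_FILE" "$VHD_FILE"

# Then upload the VHD and import with a disk-container format of "vhd"
```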
## Automation Example (Bash Script)
Below is a minimal script that strings the whole process together. Adjust variables as needed.
```bash
#!/usr/bin/env bash
set -euo pipefail

# ---- Configuration -------------------------------------------------
QCOW2_FILE="${1:-ubuntu-20.04-server.qcow2}"
S3_BUCKET="my-import-bucket"
S3_PREFIX="imports"
REGION="us-east-1"
# --------------------------------------------------------------------

RAW_FILE="${QCOW2_FILE%.qcow2}.raw"
S3_KEY="${S3_PREFIX}/${RAW_FILE##*/}"

echo "[1] Converting QCOW2 → RAW ..."
qemu-img convert -p -f qcow2 -O raw "$QCOW2_FILE" "$RAW_FILE"

echo "[2] Uploading RAW to S3 ..."
aws s3 cp "$RAW_FILE" "s3://$S3_BUCKET/$S3_KEY" --region "$REGION"

echo "[3] Starting import task ..."
IMPORT_TASK_ID=$(aws ec2 import-image \
  --description "Imported from ${QCOW2_FILE}" \
  --license-type BYOL \
  --disk-containers "Format=raw,UserBucket={S3Bucket=$S3_BUCKET,S3Key=$S3_KEY}" \
  --region "$REGION" \
  --output text \
  --query 'ImportTaskId')
echo "Import task ID: $IMPORT_TASK_ID"

# Poll until the task completes or fails
while true; do
  STATUS=$(aws ec2 describe-import-image-tasks \
    --import-task-ids "$IMPORT_TASK_ID" \
    --region "$REGION" \
    --query 'ImportImageTasks[0].Status' \
    --output text)
  echo "Current status: $STATUS"
  [[ "$STATUS" == "completed" ]] && break
  [[ "$STATUS" == "deleted" ]] && { echo "Import failed"; exit 1; }
  sleep 30
done

AMI_ID=$(aws ec2 describe-import-image-tasks \
  --import-task-ids "$IMPORT_TASK_ID" \
  --region "$REGION" \
  --query 'ImportImageTasks[0].ImageId' \
  --output text)
echo "Import complete – AMI ID: $AMI_ID"

# Optional clean-up
aws s3 rm "s3://$S3_BUCKET/$S3_KEY"
echo "Raw image removed from S3."
```
Run it as:
```bash
chmod +x import-qcow2.sh
./import-qcow2.sh my-image.qcow2
```

## Closing Thoughts
Converting a QCOW2 image into an AWS AMI is a deterministic process: convert → upload → import → verify. By adhering to the steps above you eliminate guesswork, keep costs predictable, and end up with a production‑ready AMI that mirrors your original virtual machine.
If you run into permission errors or unexpected boot behavior, revisit the “Common Pitfalls” table before opening a support ticket. The workflow scales nicely: integrate it into CI/CD pipelines, wrap it in Terraform null_resource blocks, or invoke it from AWS Systems Manager Automation for fully automated migrations.
Happy migrating!






