Monday, July 21, 2025

lots of github and Mac learnings

Trying to compile OpenSpace on macOS.

Learnt a lot of things from Copilot and ChatGPT, since I didn't have any humans available to ask specific questions. Also bumped against the limitations of the LLMs - sometimes they give wrong advice with full confidence! Perhaps prompting them to cite sources, state confidence levels, etc. might help. Some points, in no particular order.
  • Xcode's support for some C++ features is missing - C++ language support - and in some cases AppleClang's interpretation is stricter than GNU C++ or MSVC. Also, some macOS-specific flags change between older and newer versions of macOS, and Homebrew support for older Macs is spotty. So supporting Macs is a challenge.

  • Verifying and testing on GitHub runners has some issues due to their headless nature.

  • Uploading to draft releases works best if that draft release is pre-created - tag creation sometimes takes a few seconds, and uploading the artifact to the draft release fails in the meantime. Alternatively, create a new tag, wait a few seconds, and then upload the artifact to the draft release using a personal access token (PAT).
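A minimal sketch of the pre-create-then-upload approach with the GitHub CLI - the tag name, artifact name and retry counts below are assumptions, and gh must be authenticated with a PAT (e.g. via GH_TOKEN):

```shell
# Hypothetical tag and artifact names; gh must be authenticated (GH_TOKEN=<PAT>).
TAG="v0.1.0-nightly"

# Pre-create the draft release so the upload target already exists:
#   gh release create "$TAG" --draft --title "Nightly build" --notes "CI artifact"

# If the tag was only just pushed, retry the upload a few times while
# GitHub registers it:
retry() {
  local attempt
  for attempt in 1 2 3 4 5; do
    "$@" && return 0
    sleep 5   # give GitHub a few seconds before trying again
  done
  return 1
}
#   retry gh release upload "$TAG" OpenSpace-macos.zip --clobber
```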

  • macOS, especially after 15.2, does not make it easy to run unsigned apps. Workarounds which exist now, like removing the quarantine flag from the internal binary, may stop working in the future. It is possible that Debug builds will run while (unsigned) Release builds will not - as with Android APK files - unless we add a codesign step after the build.
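The usual workarounds, sketched below - the app bundle path is a placeholder, and these macOS-only commands may stop working in later releases:

```shell
# Hypothetical app bundle path.
APP="OpenSpace.app"

# Remove the quarantine attribute Gatekeeper puts on downloaded files:
xattr -dr com.apple.quarantine "$APP"

# Ad-hoc sign after the build so Release binaries carry at least some signature:
codesign --force --deep --sign - "$APP"

# Check the result:
codesign --verify --verbose "$APP"
```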

  • Even after bundling dylibs using otool, the binary still seems to look for dylibs in /opt/homebrew etc., so the bundling code suggested by ChatGPT still has flaws. Or maybe some dependencies are loaded only at runtime (via dlopen) and hence are not detected by otool.
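A sketch of the inspect-and-rewrite step, assuming a hypothetical binary path and library name - note that otool -L only lists link-time dependencies, so anything dlopen()'d at runtime will not show up here:

```shell
# Hypothetical paths; macOS-only commands.
BIN="OpenSpace.app/Contents/MacOS/OpenSpace"

# List the dylibs the binary was linked against:
otool -L "$BIN"

# Rewrite a Homebrew install name to point at a bundled copy instead
# (libfoo.dylib is a stand-in for a real dependency):
install_name_tool -change /opt/homebrew/lib/libfoo.dylib \
    @executable_path/../Frameworks/libfoo.dylib "$BIN"

# Re-check whether any Homebrew paths remain:
otool -L "$BIN" | grep /opt/homebrew
```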

Saturday, June 28, 2025

solution for rendering black frame & moving a Blender keyframe to a different time

The blend file linked from

would not render out of the box on Cycles or Eevee with Blender 4 (the tutorial was made for Blender 2.8). I thought one possible reason was that I had modified the number of render samples:

"Samples should not be lower than volumetric samples."

But Cycles was rendering with multiple samples right up to the final moment, when the render would go black. ChatGPT confirmed my hunch that this was due to some compositing settings. Disabled compositing in Render properties --> Post-processing, and then the render showed up. 4K renders of each frame took around 5 to 10 minutes on the GTX 1060 GPU. The SheepIt render farm rendered the project in less than a day, after modifications to the length of the animation and using a 180-degree fisheye "fulldome" panoramic camera.

For moving keyframes - to make the animation 10x longer, move a keyframe at frame 25 to frame 250, as an example - https://blender.stackexchange.com/questions/221434/how-to-move-selected-keyframes-to-current-frame-or-a-specified-frame-number#221446

"Hovering on the action editor or dopesheet you can select the keyframes you want to move, set the time cursor to the desired frame and press S,X,0,Enter."

The final product is below: a fulldome (180-degree fisheye) 4K video suitable for planetariums. The full 4K resolution video is available at archive.org.






Tuesday, June 17, 2025

descreen and perspective correction with gimp

To remove moiré patterns from prints of silk-screened photos, the G'MIC plugin's Fourier filter can be used. To install: gimp-gmic is in the Software Manager of Linux Mint.

https://www.youtube.com/watch?v=3137dDa6P4s

According to this video, the way to do it -
1. first Fourier transform in gmic
2. then make the white dots black for everything except the centre
3. then inverse transform
4. then smooth medium (also in gmic)

(This page refers to a specific descreen plugin, but that is for GIMP 2.0 and the link also seems broken, so I didn't try it.)

For perspective correction, especially for "re-clicked" photos taken with a mobile from an angle:

Tools --> Transform Tools --> Perspective --> choose Corrective (Backward) for Direction,

then put the four handles on the four corners of the photo to be corrected. The four corners will then be stretched to the aspect ratio of the original image. After that, some Transform --> Scale changes may be needed, with aspect ratio unlocked, in case the photo being corrected has a different aspect ratio.

http working but not https

There was a report that one of our websites was down. Reported it to PB; his reply was:

Found that http was working but not https

Then did the following:

root@ourserver:~# openssl s_client -connect our.domain.org:443 -servername our.domain.org

and got the error message
verify error:num=20:unable to get local issuer certificate

# comment out in  /etc/apache2/sites-enabled/our.domain.org-le-ssl.conf

# SSLCertificateChainFile /etc/letsencrypt/live/our.domain.org/chain.pem

and then restarted apache2
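Why commenting out SSLCertificateChainFile helps, as I understand it: since Apache 2.4.8 that directive is deprecated, and certbot's fullchain.pem already contains the leaf certificate plus the intermediates, so pointing SSLCertificateFile at fullchain.pem is enough. A quick sanity check (paths as in the config above) is to count the certificates in each file:

```shell
# Count PEM certificates in a file - a full chain should contain 2 or more,
# a bare leaf certificate exactly 1.
count_certs() { grep -c 'BEGIN CERTIFICATE' "$1"; }

# On the server (run as root):
#   count_certs /etc/letsencrypt/live/our.domain.org/fullchain.pem   # expect >= 2
#   count_certs /etc/letsencrypt/live/our.domain.org/cert.pem        # expect 1
#
# And always test the config before restarting:
#   apachectl configtest && systemctl restart apache2
```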

Friday, June 13, 2025

download and run an AWS VM snapshot locally on Virtualbox

It took us a lot of time to accomplish this: run an AWS VM snapshot locally on VirtualBox and test whether all the services are working - Apache, MySQL, dotnet, WordPress on PHP. With help from ChatGPT, the working method is outlined below.


2. Export the AMI as VMDK - copy-pasting below from Gemini's howto. (Our initial attempts to do the export failed, as the role named "vmimport" had not been created with the required permissions. We could use AWS CloudShell for the AWS CLI steps.)

You can export an AWS AMI as a VMDK file using the AWS VM Import/Export service. This process involves exporting the AMI to an Amazon S3 bucket, from where you can then download the VMDK file.

Here's a detailed breakdown of the steps and important considerations:

1. Prerequisites and Limitations:

Before you start, be aware of these crucial points:

  • AWS CLI: You'll primarily use the AWS Command Line Interface (CLI) for this process, as it's the most common and robust way to manage VM Import/Export tasks. Ensure you have it installed and configured with appropriate AWS credentials.
  • S3 Bucket: You need an Amazon S3 bucket in the same AWS Region as your AMI to store the exported VMDK file.
  • IAM Role (vmimport): AWS VM Import/Export requires an IAM service role (typically named vmimport) with the necessary permissions to read from and write to your S3 bucket. If you don't have this, you'll need to create it. AWS documentation provides the required trust policy and permissions.
  • AMI Export Limitations:
    • Not all AMIs can be exported. You generally cannot export AMIs that contain third-party software provided by AWS (e.g., Windows or SQL Server images, or any AMI from the AWS Marketplace), unless they were originally imported as "Bring Your Own License" (BYOL) through VM Import/Export or AWS Application Migration Service (MGN)/AWS Elastic Disaster Recovery (AWS DRS).
    • You cannot export an AMI with encrypted EBS snapshots.
    • VMs with volumes larger than 1 TiB might have limitations (though for larger disks, you might need a manifest file).
    • You can't export an AMI if you've shared it from another AWS account.
    • You can't have multiple export image tasks in progress for the same AMI simultaneously.
    • Volumes attached after instance launch are not exported; only those specified in the AMI's block device mapping are included.
  • Consistent AMI: For the best results, ensure your AMI was created from a stopped instance or an application-consistent snapshot to avoid data corruption in the exported VMDK.

2. Steps to Export the AMI as VMDK:

a. Create an S3 Bucket (if you don't have one):

You can do this via the AWS Management Console or AWS CLI.

AWS Console:

  1. Go to the S3 service.
  2. Click "Create bucket."
  3. Provide a unique bucket name.
  4. Choose the same AWS Region as your AMI.
  5. Keep "Block all public access" enabled unless you have a specific reason to disable it (and understand the security implications).
  6. Click "Create bucket."

AWS CLI:

Bash
aws s3 mb s3://your-export-bucket-name --region your-aws-region

b. Create the IAM Role (if you don't have one):

This is crucial for allowing AWS VM Import/Export to interact with your S3 bucket.

  1. Create a trust policy file (trust-policy.json):
    JSON
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "vmie.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringEquals": {
              "sts:ExternalId": "vmimport"
            }
          }
        }
      ]
    }
    

2. Create the IAM role (vmimport):

Bash
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

3. Create an inline policy file (role-policy.json):

Replace your-export-bucket-name with your actual bucket name.

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-export-bucket-name",
        "arn:aws:s3:::your-export-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::your-export-bucket-name",
        "arn:aws:s3:::your-export-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}

4. Attach the inline policy to the vmimport role:

Bash
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

c. Start the Export Image Task:

Use the export-image command with the AWS CLI.

Bash
aws ec2 export-image \
    --image-id ami-0123456789abcdef0 \
    --disk-image-format VMDK \
    --s3-export-location S3Bucket=your-export-bucket-name,S3Prefix=exports/ \
    --description "Exported VMDK from MyWebApp AMI"
  • Replace ami-0123456789abcdef0 with the actual ID of your AMI.
  • Replace your-export-bucket-name with the name of your S3 bucket.
  • S3Prefix=exports/ is optional but good practice to organize your exported files within the bucket.
  • --disk-image-format VMDK specifies the output format. You can also choose VHD or RAW.

d. Monitor the Export Task:

The export process can take some time, depending on the size of your AMI. You can check the status using the describe-export-image-tasks command:

aws ec2 describe-export-image-tasks 

The Status field will show the progress (e.g., active, completed, deleting, deleted). Wait until the status changes to completed.
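The waiting can be scripted instead of re-running the command by hand; a rough sketch below, where the status-fetching command is passed in as a string (the task id in the commented usage is hypothetical):

```shell
# Poll until a status command prints "completed". The command is passed in as
# a string so the loop can also be exercised locally.
wait_until_completed() {
  local get_status=$1 status
  while true; do
    status=$($get_status)
    [ "$status" = "completed" ] && return 0
    sleep 30   # poll every 30 seconds
  done
}

# Real usage against AWS (task id is hypothetical):
# wait_until_completed "aws ec2 describe-export-image-tasks \
#   --export-image-task-ids export-ami-0123456789abcdef0 \
#   --query ExportImageTasks[0].Status --output text"
```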


3. Download the VMDK - for this, I tried s3cmd, which didn't work, based on this discussion. Just installing the AWS CLI with the method outlined here, and

aws s3 cp  s3://our-vm-export-2025/exports/export-ami-our-vmdk-file.vmdk localfile.vmdk

worked, after 

aws configure 

and entering the access key and secret key (which PB had to generate and give me). 

4. Running the VMDK and verifying it - it turned out that just creating a new VM with the VMDK file as the root file system (under Advanced) would make it run, but the network interface provided by VirtualBox differs from the one provided by AWS EC2, so I had to log on via the console. Since there was no password set (or I couldn't remember it), I followed this chroot procedure to set a password for the root user. For mounting the VMDK file, the easiest way was to go to another VirtualBox VM and, in that VM, add this VMDK with a higher SCSI port number as an additional disk, under Settings - Storage for that VM.
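For reference, the standard chroot password reset goes roughly as below - the device name /dev/sdb1 is an assumption and depends on the SCSI port the VMDK was attached to:

```shell
# Assumed device: the attached VMDK shows up as /dev/sdb, root partition sdb1.
sudo mount /dev/sdb1 /mnt
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys

# Change root's password inside the guest filesystem:
sudo chroot /mnt passwd root

# Clean up before detaching the disk:
sudo umount /mnt/dev /mnt/proc /mnt/sys /mnt
```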

(qemu-img convert -f vmdk -O raw aws-image.vmdk aws-image.raw and then mounting via a loop device was very time-consuming, since our hard disk was slow. Hence, avoid this method.)

Then we could log in and, as prompted by ChatGPT, modify the cloud-init netplan file - copy-pasting below -

ip link # to check for interfaces without having ifconfig installed

#Update /etc/netplan/ (for Ubuntu 18.04+, or /etc/network/interfaces for older systems).

Example for Netplan (Ubuntu):

network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true

In our case, there was an extra 

set-name: ens5

line which needed to be commented out. Then,

sudo netplan apply

ip a # to check if the interface is up, and if it is down,

sudo ip link set enp0s3 up

and now 

ip a 

should show that the interface has an ip address via dhcp. 

In this case, we tested the VM with NAT networking, with port forwarding set under VirtualBox Settings - Network: host port 2244 for SSH (guest 22), 8080 for 80, and 8443 for 443.
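The same forwards can be set from the command line; a sketch, with the VM name "aws-clone" being a placeholder:

```shell
# Rule format: name,protocol,host-ip,host-port,guest-ip,guest-port
# (empty IP fields mean "any" / the NAT default).
VM="aws-clone"   # placeholder VM name
VBoxManage modifyvm "$VM" --natpf1 "ssh,tcp,,2244,,22"
VBoxManage modifyvm "$VM" --natpf1 "http,tcp,,8080,,80"
VBoxManage modifyvm "$VM" --natpf1 "https,tcp,,8443,,443"
```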

Then, after adding the host's IP address - 192.168.1.6 in this case - to /etc/hosts for the various domains we wished to test, the dotnet app worked fine. WordPress refused to connect - apparently WordPress rejects requests whose host and port don't match its configured site URL, so we needed some config changes to make it accept such requests on port 8443, like editing wp-config.php with the lines

define('WP_HOME', 'https://our.domain.org:8443');

define('WP_SITEURL', 'https://our.domain.org:8443');