brew install mesa
But the same crash is observed. Claude says,
The ffmpeg command
ffmpeg -y -r 30 -i in_%05d.jpg -i ../NormPeak-6dBENG.wav -c:v hevc_nvenc -preset lossless -global_quality 18 -pix_fmt nv12 -c:a aac -b:a 192k ~/Downloads/4kframes.mp4
gives the following warnings -
[aist#1:0/pcm_s16le @ 0x59f7e96fbfc0] Guessed Channel Layout: stereo
....
[in#0/image2 @ 0x59f7e96f4e40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[swscaler @ 0x59f7e97ae500] deprecated pixel format used, make sure you did set range correctly
Last message repeated 3 times
[hevc_nvenc @ 0x59f7e9701880] The selected preset is deprecated. Use p1 to p7 + -tune or fast/medium/slow.
[hevc_nvenc @ 0x59f7e9701880] Using global_quality with nvenc is deprecated. Use qp instead.
Claude's suggestion to fix -
ffmpeg -y -r 30 -i in_%05d.jpg -thread_queue_size 512 -i ../NormPeak-6dBENG.wav -c:v hevc_nvenc -preset p7 -tune lossless -cq 0 -pix_fmt yuv420p -c:a aac -b:a 192k ~/Downloads/4kframes.mp4
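Since the warning literally says to use qp instead of global_quality, a variant of the above that follows that advice might look like this (an untested sketch: -rc constqp -qp 0 are standard hevc_nvenc options, and -thread_queue_size is moved in front of the image input, which is the input the warning was about):
ffmpeg -y -r 30 -thread_queue_size 512 -i in_%05d.jpg -i ../NormPeak-6dBENG.wav \
-c:v hevc_nvenc -preset p7 -tune lossless -rc constqp -qp 0 \
-pix_fmt yuv420p -c:a aac -b:a 192k ~/Downloads/4kframes.mp4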
PB helped me out with the following info:
From ChatGPT,
ffmpeg -i filename_%05d.jpg -c:v hevc_qsv -preset veryslow -global_quality 0 output_lossless_qsv.mkv
With audio, we would need something like
ffmpeg -r 30 -i KG-LS_%05d.jpg -i audio.wav \
-c:v hevc_qsv -preset veryslow -global_quality 0 \
-c:a aac -b:a 192k \
-shortest \
output_with_audio.mkv
# or on Windows,
ffmpeg -r 30 -i KG-LS_%05d.jpg -i audio.wav ^
-c:v hevc_qsv -preset veryslow -global_quality 0 ^
-c:a aac -b:a 192k ^
-shortest ^
output_with_audio.mkv
# Individual discussion subscriptions might still exist
DELETE FROM prefix_forum_digests;
DELETE FROM prefix_forum_discussion_subs;
UPDATE prefix_user SET maildigest = 0 WHERE emailstop = 1;
# Users might be getting auto-subscribed when they post
UPDATE prefix_forum SET trackingtype = 0;
TRUNCATE prefix_forum_track_prefs;
According to this youtube video, there are some terms and conditions which are shown only in the Airtel app, by which
ffmpeg -i input.mp4 -i map_x_directp2.pgm -i map_y_directp2.pgm -i weight_alpha_mask.png \
-filter_complex " [0:v]scale=3840:2160[scaled]; [scaled][1:v][2:v]remap[remapped]; [3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb]; [remapped][mask_rgb]blend=all_mode=multiply[out] " \
-map "[out]" -map 0:a -c:v hevc_nvenc -preset p5 \
-cq 23 -rc vbr -maxrate 15M -bufsize 26M \
-c:a aac -b:a 128k output.mp4
via https://linuxvox.com/blog/create-usb-boot-linux-mint/
sudo dd if='/path/to/linuxmint-22-cinnamon-64bit.iso' of=/dev/sdd bs=4M status=progress
2810183680 bytes (2.8 GB, 2.6 GiB) copied, 140 s, 20.1 MB/s
So, on my machine, it took less than three minutes to create a 2.8 GB boot disk.
Asked ChatGPT, "linux commandline to list total disk space used of all files created or modified inside a directory more than one year ago" and got the following -
You can do this with find + du or find + stat + awk.
Here’s a one-liner that lists the total disk space used by files in a directory older than one year:
find /path/to/dir -type f -mtime +365 -printf "%s\n" | \
awk '{sum+=$1} END {print sum/1024/1024 " MB"}'
find /path/to/dir → search inside that directory.
-type f → only files.
-mtime +365 → modified more than 365 days ago (roughly one year).
-printf "%s\n" → print file sizes in bytes.
awk → sums up sizes and prints total in MB (divide by 1024/1024).
If you prefer human-readable output directly:
find /path/to/dir -type f -mtime +365 -print0 | du --files0-from=- -ch | tail -n1
This gives you a total in du’s human-readable format.
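The find + stat + awk route mentioned at the top of the answer would be roughly the following (a sketch; %s is GNU stat's size-in-bytes format):
find /path/to/dir -type f -mtime +365 -exec stat -c '%s' {} + | \
awk '{sum+=$1} END {printf "%.1f MB\n", sum/1024/1024}'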
Asked ChatGPT, "tree view of files and directories larger than 1 GB from linux command line" and got the following -
You can combine find, du, and tree-style output to list only files/directories larger than 1 GB in a tree view. Here are a couple of approaches:
find with tree
find /path/to/search -type f -size +1G -print | tree --fromfile
find ... -size +1G → finds files > 1 GB
tree --fromfile → prints a tree structure from the list
If you want directories too (based on contents size), use du instead (see below).
du for directories/files with sizes
du -h --threshold=1G --max-depth=3 /path/to/search | sort -h
--threshold=1G → only show entries larger than 1 GB
--max-depth=3 → how deep to go
sort -h → human-readable sort
This shows sizes and paths (not tree formatting, but size summary is clearer).
du piped to tree
du -h --threshold=1G --max-depth=3 /path/to/search | sort -hr |
awk '{print $2}' | tree --fromfile
This gives you a tree structure of directories/files whose total size is ≥ 1 GB.
✅ If you want only the largest offenders (say top 20 big files/directories over 1 GB), you can do:
find /path/to/search -type f -size +1G -exec du -h {} + | sort -hr | head -20
For this, I had to sudo apt install tree
How to use github projects - a getting started video -
https://www.youtube.com/watch?v=oPQgFxHcjAw
Opened a private repo with issues, and with ChatGPT's help, linked it to a project. (Projects --> New Project in the github web UI for that repo). Automation to create a new issue worked after some tweaking - the github actions yml is copy-pasted below:
name: Instantiate Planetarium Show Task
on:
  workflow_dispatch:
    inputs:
      show_title:
        description: 'Title of the Planetarium Show'
        required: true
      deadline:
        description: 'Deadline'
        required: false
permissions:
  issues: write
  contents: read
jobs:
  create_task_issue:
    runs-on: ubuntu-latest
    steps:
      - name: Build issue content
        run: |
          echo "## Show Details" > issue.md
          echo " " >> issue.md
          echo "**Show Title:** " >> issue.md
          echo "${{ github.event.inputs.show_title }}" >> issue.md
          echo "" >> issue.md
          echo "**Deadline:** " >> issue.md
          echo "${{ github.event.inputs.deadline }}" >> issue.md
          echo "" >> issue.md
          echo "## Steps" >> issue.md
          echo "" >> issue.md
          echo "- [ ] Create script" >> issue.md
          echo "- [ ] Create Hindi translation" >> issue.md
          echo "- [ ] Create Telugu translation" >> issue.md
          echo "- [ ] Review Hindi translation" >> issue.md
          echo "- [ ] Review Telugu translation" >> issue.md
          echo "- [ ] Record Hindi VO" >> issue.md
          echo "- [ ] Record Telugu VO" >> issue.md
          echo "- [ ] Edit Hindi VO" >> issue.md
          echo "- [ ] Edit Telugu VO" >> issue.md
          echo "- [ ] Warp ready for our theatre" >> issue.md
          echo "- [ ] Eng-Hin-Tel audio" >> issue.md
          echo "- [ ] Final audio mix" >> issue.md
          echo "- [ ] Save backups" >> issue.md
          echo "- [ ] Post on website" >> issue.md
          echo "" >> issue.md
          echo "---" >> issue.md
          echo "" >> issue.md
          echo "" >> issue.md
      - name: Create issue from template
        uses: peter-evans/create-issue-from-file@v4
        with:
          title: "[Show] ${{ github.event.inputs.show_title }}"
          content-filepath: issue.md
          token: ${{ secrets.GITHUB_TOKEN }}
          labels: show,task
This script had to be rewritten because ChatGPT repeatedly used heredoc syntax, which clashed with GitHub's YAML; I had to explicitly ask it, "Please use echo >> on each line, and no heredoc."
One more automation script, meant to automatically move items in the project from "TODO" to "In Progress" when any of the checkboxes above was ticked, failed. ChatGPT usually fails when the documentation is not complete and/or there are conflicting earlier documentation pages, as in this case. It would repeatedly suggest using
permissions:
  issues: write
  projects: write
when actually such a permission is no longer accepted by Github - it was part of "classic projects" - https://docs.github.com/en/enterprise-server@3.16/issues/planning-and-tracking-with-projects/creating-projects/migrating-from-projects-classic.
Apparently with Projects V2, the REST API does not work; only the GraphQL API does. And it needs a classic personal access token to work. The code generated by ChatGPT had errors which I could not get it to fix after multiple iterations over multiple days, so I just left the updates to be done manually, by clicking the drop-down seen in the linked project for each issue. I could probably read the GraphQL API documentation and try again, but right now, not motivated to do so.
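For the record, the kind of GraphQL call the Projects V2 documentation describes would look roughly like the sketch below; every node id here is a placeholder that would first have to be looked up with separate queries, and gh needs a token with the project scope:
gh api graphql -f query='
mutation {
  updateProjectV2ItemFieldValue(input: {
    projectId: "PVT_placeholder"
    itemId: "PVTI_placeholder"
    fieldId: "PVTSSF_placeholder"
    value: { singleSelectOptionId: "option_placeholder" }
  }) {
    projectV2Item { id }
  }
}'
(Moving an item to "In Progress" amounts to setting the Status single-select field on the project item this way.)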
After a cloudflare advisory asking us to rotate credentials due to a "Compromise Impacting Cloudflare Salesforce Cases", I was asked to force 2FA for all cloudflare users.
https://developers.cloudflare.com/fundamentals/user-profiles/2fa/
The force-2FA option is available for super-admins in the cloudflare admin panel, at
Manage Account > Members > Members 2FA enforcement.
via https://www.digitalinformationworld.com/2025/05/gmail-is-reading-and-summarizing-your.html
On Android --> GMail app --> Hamburger menu --> Settings --> choose the account for which you want to disable --> Untick smart features checkbox
To remove moire patterns due to silk-screened photos from prints, gmic plugin's fourier filter can be used. To install - gimp-gmic is in software manager of Linux Mint.
https://www.youtube.com/watch?v=3137dDa6P4s
According to this video, the way to do it -
1. first fourier transform in gmic
2. Then make the white dots black for everything except centre
3. then inverse
4. then smooth medium (also in gmic)
(This page refers to a specific descreen plugin, but that is for Gimp 2.0 and the link also seems broken, so didn't try out that.)
For perspective correction, especially for "re-clicked" photos taken with mobile from an angle,
Tools --> Transform tools --> Perspective --> choose Corrective (backward) for Direction,
put the four handles on the four corners of the photo to be corrected. Those corners will then be stretched to the aspect ratio of the original image. After that, some Transform --> Scale changes may be needed, with the aspect ratio unlocked, in case the photo being corrected has a different aspect ratio.
Found that http was working but not https. Then did the following:
root@ourserver: openssl s_client -connect our.domain.org:443 -servername our.domain.org
and got the error message
verify error:num=20:unable to get local issuer certificate
# comment out in /etc/apache2/sites-enabled/our.domain.org-le-ssl.conf
# SSLCertificateChainFile /etc/letsencrypt/live/our.domain.org/chain.pem
and then restarted apache2.
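For context, after the change the SSL section of that vhost would look roughly like this (a sketch, assuming the usual certbot layout where SSLCertificateFile already points at fullchain.pem, which is what makes the separate chain file redundant):
# /etc/apache2/sites-enabled/our.domain.org-le-ssl.conf (excerpt, sketch)
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/our.domain.org/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/our.domain.org/privkey.pem
# SSLCertificateChainFile /etc/letsencrypt/live/our.domain.org/chain.pem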
You can export an AWS AMI as a VMDK file using the AWS VM Import/Export service. This process involves exporting the AMI to an Amazon S3 bucket, from where you can then download the VMDK file.
Here's a detailed breakdown of the steps and important considerations:
1. Prerequisites and Limitations:
Before you start, be aware of these crucial points:
You need an IAM service role (typically named vmimport) with the necessary permissions to read from and write to your S3 bucket. If you don't have this, you'll need to create it. AWS documentation provides the required trust policy and permissions.
2. Steps to Export the AMI as VMDK:
a. Create an S3 Bucket (if you don't have one):
You can do this via the AWS Management Console or AWS CLI.
AWS Console:
AWS CLI:
aws s3 mb s3://your-export-bucket-name --region your-aws-region
b. Create the IAM Role (if you don't have one):
This is crucial for allowing AWS VM Import/Export to interact with your S3 bucket.
1. Create a trust policy file (trust-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "vmie.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "vmimport"
}
}
}
]
}
2. Create the IAM role (vmimport):
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
3. Create an inline policy file (role-policy.json):
Replace your-export-bucket-name with your actual bucket name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-export-bucket-name",
        "arn:aws:s3:::your-export-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::your-export-bucket-name",
        "arn:aws:s3:::your-export-bucket-name/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
4. Attach the inline policy to the vmimport role:
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
c. Start the Export Image Task:
Use the export-image command with the AWS CLI.
aws ec2 export-image \
--image-id ami-0123456789abcdef0 \
--disk-image-format VMDK \
--s3-export-location S3Bucket=your-export-bucket-name,S3Prefix=exports/ \
--description "Exported VMDK from MyWebApp AMI"
Replace ami-0123456789abcdef0 with the actual ID of your AMI.
Replace your-export-bucket-name with the name of your S3 bucket.
S3Prefix=exports/ is optional but good practice to organize your exported files within the bucket.
--disk-image-format VMDK specifies the output format. You can also choose VHD or RAW.
d. Monitor the Export Task:
The export process can take some time, depending on the size of your AMI. You can check the status using the describe-export-image-tasks command:
aws ec2 describe-export-image-tasks
The Status field will show the progress (e.g., active, completed, deleting, deleted). Wait until the status changes to completed.
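To watch just one task instead of listing all of them, the task id returned by export-image can be passed explicitly (the id below is a placeholder):
aws ec2 describe-export-image-tasks --export-image-task-ids export-ami-0123456789abcdef0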
3. Download the vmdk. For this, I first tried s3cmd based on this discussion, but it didn't work. Just installing the AWS CLI with the method outlined here, and
aws s3 cp s3://our-vm-export-2025/exports/export-ami-our-vmdk-file.vmdk localfile.vmdk
worked, after
aws configure
and entering the access key and secret key (which PB had to generate and give me).
4. Running the vmdk and verifying it. It turned out that just creating a new VirtualBox VM with the VMDK file as the root file system (under Advanced) would make it run, but the network interface provided by VirtualBox is different from the one provided by AWS EC2, so I had to log on to the console. Since there was no password set (or I couldn't remember it), I followed this chroot procedure to set a password for the root user. For mounting the VMDK file, the easiest way was to go to another VirtualBox VM and, in that VM, add this vmdk as an additional disk with a higher SCSI port number, under Settings - Storage for that VM.
(qemu-img convert -f vmdk -O raw aws-image.vmdk aws-image.raw etc. and mounting the raw image with a loop device was very time-consuming, since our hard disk was slow. Hence, avoid this method.)
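Attaching the vmdk to that helper VM can also be done from the command line; a sketch, assuming the VM is named HelperVM and its controller is called SCSI (both names must match what Settings - Storage shows):
VBoxManage storageattach "HelperVM" --storagectl "SCSI" \
  --port 3 --device 0 --type hdd \
  --medium /path/to/aws-image.vmdk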
Then, we could log in, and as prompted by chatgpt, modify the cloud-init file - copy-pasting below -
ip link # to check for interfaces without having ifconfig installed
#Update /etc/netplan/ (for Ubuntu 18.04+, or /etc/network/interfaces for older systems).
Example for Netplan (Ubuntu):
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
In our case, there was an extra
set-name: ens5
line which needed to be commented out. Then,
sudo netplan apply
ip a # to check if the interface is up, and if it is down,
sudo ip link set enp0s3 up
and now
ip a
should show that the interface has an ip address via dhcp.
In this case, we tested the VM with NAT networking, with port forwarding set on Virtualbox Settings - Network - 2244 for ssh, 8080 for 80, 8443 for 443.
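The same port forwards can also be set with VBoxManage; a sketch, assuming the VM is named AWSTestVM and adapter 1 is the NAT adapter:
VBoxManage modifyvm "AWSTestVM" --natpf1 "ssh,tcp,,2244,,22"
VBoxManage modifyvm "AWSTestVM" --natpf1 "http,tcp,,8080,,80"
VBoxManage modifyvm "AWSTestVM" --natpf1 "https,tcp,,8443,,443"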
Then, after setting /etc/hosts with the IP address of the host (192.168.1.6 in this case) for the various domains we wished to test, the dotnet app worked fine. WordPress refused to connect - apparently NATted requests are refused by WordPress by default, and we would need to make some changes to the config file to make it accept such requests on port 8443, like editing wp-config.php with the lines
define('WP_HOME', 'https://our.domain.org:8443');
define('WP_SITEURL', 'https://our.domain.org:8443');