Sunday, November 30, 2025

tuning lossless ffmpeg command-line with Claude

The ffmpeg command

ffmpeg -y -r 30 -i in_%05d.jpg -i ../NormPeak-6dBENG.wav -c:v hevc_nvenc -preset lossless -global_quality 18 -pix_fmt nv12 -c:a aac -b:a 192k ~/Downloads/4kframes.mp4

gives the following warnings - 

[aist#1:0/pcm_s16le @ 0x59f7e96fbfc0] Guessed Channel Layout: stereo
....

[in#0/image2 @ 0x59f7e96f4e40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)

[swscaler @ 0x59f7e97ae500] deprecated pixel format used, make sure you did set range correctly

    Last message repeated 3 times

[hevc_nvenc @ 0x59f7e9701880] The selected preset is deprecated. Use p1 to p7 + -tune or fast/medium/slow.

[hevc_nvenc @ 0x59f7e9701880] Using global_quality with nvenc is deprecated. Use qp instead. 

Claude's suggestion to fix - 

ffmpeg -y -r 30 -i in_%05d.jpg -thread_queue_size 512 -i ../NormPeak-6dBENG.wav -c:v hevc_nvenc -preset p7 -tune lossless -cq 0 -pix_fmt yuv420p -c:a aac -b:a 192k ~/Downloads/4kframes.mp4
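
One thing to note, going by the option-placement rule discussed in the Nov 21 post further down: the thread_queue_size warning above was reported for in#0 (the image-sequence input), and per-input options apply to the -i that follows them, so in Claude's command the option ends up attached to the wav input. If the warning persists, a variant worth trying (a sketch, not tested) is:

ffmpeg -y -thread_queue_size 512 -r 30 -i in_%05d.jpg -i ../NormPeak-6dBENG.wav \
  -c:v hevc_nvenc -preset p7 -tune lossless -cq 0 -pix_fmt yuv420p \
  -c:a aac -b:a 192k ~/Downloads/4kframes.mp4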

Friday, November 28, 2025

MIDI-mapper for OpenSpace (or any javascript API)

With the help of Claude.ai, I've put up a MIDI-mapper for OpenSpace at OpenSpace MIDI mapping.

The readme.txt lists the prompts used, and usage. Copy-pasting from my post on OpenSpace Slack: With this, we can control OpenSpace using any MIDI control surface or MIDI instrument. Interesting effects can be imagined, like speeding up time with one (MIDI) violin string, slowing it down with another string, and so on.

I've used our KORG nanokontrol - earlier version of this product - to do the development and testing.

By adding suitable functions, this could be extended to call any javascript API using a MIDI controller or MIDI instrument.

Thursday, November 27, 2025

Bosch Wiper blades for Hyundai Santro

 PB helped me out with the following info:

Attached is the wiper blades size for Santro
List shows 20" on Driver side and 16" on Passenger side




TYPES OF BOSCH WIPERS
Bosch Clear Advantage — a flat-blade / beam-style wiper. Designed to give uniform pressure across the windshield, better contact, smoother wiping, and more streak-free clearing (especially good in heavy rain or at high speeds).

Bosch ECO (conventional / bracket-frame wiper) — a more traditional wiper with a metal (or mixed) frame + rubber blade. Usually more budget-friendly and covers basic everyday wiping needs.

AMAZON LINKS
Bosch Clear Advantage 16"
I have been using Bosch ECO, and it works well. 
Will try Clear Advantage next time.

And I found that the amazon listings include video guides on how to fit the blades. They are also available on youtube, for both Bosch ECO and Clear Advantage.

On my (new) Santro, the 20" blade seems a bit short. Perhaps I'll try the 21-inch blade next time.

Friday, November 21, 2025

hardware accelerated video encoding on Intel Iris graphics has issues with jpg frames input - resolved

 From ChatGPT,

ffmpeg -i filename_%05d.jpg -c:v hevc_qsv -preset veryslow -global_quality 0 output_lossless_qsv.mkv

For 4K (4096x4096) files, this was running at around 8 fps on my i5 laptop, as against 2 fps for non-accelerated libx264. The mistake I made earlier was to put -preset before the input rather than with the other encoder options.
  • Options placed before an -i are treated as input (decoder) options for that input.
  • All encoder options (-c:v, -preset, -global_quality) must be placed after the inputs and before the output filename.

If you place -preset before -i filename_%05d.jpg, FFmpeg thinks it’s a decoder option, causing the error:

Codec AVOption preset ... is not a decoding option.
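
To make the placement rule concrete, a minimal before/after sketch using the same hypothetical filenames:

# Wrong - ffmpeg treats -preset as an option for the jpg input (a "decoder option")
# and complains as above
ffmpeg -preset veryslow -i filename_%05d.jpg -c:v hevc_qsv output.mkv

# Right - encoder options after the input(s), before the output filename
ffmpeg -i filename_%05d.jpg -c:v hevc_qsv -preset veryslow -global_quality 0 output.mkv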

With audio, we would need something like

ffmpeg -r 30 -i KG-LS_%05d.jpg -i audio.wav \
  -c:v hevc_qsv -preset veryslow -global_quality 0 \
  -c:a aac -b:a 192k \
  -shortest \
  output_with_audio.mkv

# or on Windows,

ffmpeg -r 30 -i KG-LS_%05d.jpg -i audio.wav ^
  -c:v hevc_qsv -preset veryslow -global_quality 0 ^
  -c:a aac -b:a 192k ^
  -shortest ^
  output_with_audio.mkv


Edit - this produces a low-bitrate video with colour issues on VLC (and perhaps on other players). The presets from the python front-end, like the command below, might be useful -
FFmpeg command to copy-paste:
ffmpeg -y -i in_%05d.jpg -i map_x_directp2.pgm -i map_y_directp2.pgm -i weight_alpha_mask.png -filter_complex '[0:v][1:v][2:v]remap[remapped];[3:v]format=gray,scale=4096:4096,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];[remapped][mask_rgb]blend=all_mode=multiply[blended];[blended]scale=3840:2160,setsar=1,setdar=16/9,format=yuv420p[out]' -map '[out]' -map '0:a?' -c:v hevc_qsv -preset fast -crf 23 -pix_fmt yuv420p -c:a aac -b:a 128k D:/out-W4K.mp4

So, without warping, that would be - 
ffmpeg -y -i in_%05d.jpg -i audio.wav -c:v hevc_qsv -preset veryslow -crf 1 -pix_fmt yuv420p -c:a aac -b:a 128k out.mp4

But unfortunately, even the warping command produces a low-bitrate video with colour issues - green-tinted instead of purple-tinted like the earlier videos. Even changing the codec to libx265 did not help, so some incompatibility in the input frames is suspected. Maybe I'll just use avidemux instead of trying to troubleshoot.

From ffmpeg -h
Getting help:
    -h      -- print basic options
    -h long -- print more options
    -h full -- print all options (including all format and codec specific options, very long)
    -h type=name -- print all options for the named decoder/encoder/demuxer/muxer/filter/bsf/protocol

ffmpeg -h encoder=hevc_qsv
....
  -preset            <int>        E..V....... (from 1 to 7) (default medium)
     veryfast        7            E..V.......
     faster          6            E..V.......
     fast            5            E..V.......
     medium          4            E..V.......
     slow            3            E..V.......
     slower          2            E..V.......
     veryslow        1            E..V.......

....
  -profile           <int>        E..V....... (from 0 to INT_MAX) (default unknown)
     unknown         0            E..V.......
     main            1            E..V.......
     main10          2            E..V.......
     mainsp          3            E..V.......
     rext            4            E..V.......
     scc             9            E..V.......


    
Claude.ai pointed out that the colour tint issues are probably due to the wrong pixel format, and that the low bitrate is probably because "QSV's CRF implementation is unreliable". So the suggestion was to use the nv12 pixel format and global_quality instead of CRF, where a quality of 15-20 is near lossless - or to force a high bitrate if that doesn't work: -b:v 20M -maxrate 25M

ffmpeg -y -r 30 -i in_%05d.jpg -i audio.wav -c:v hevc_qsv -preset veryslow -global_quality 18 -pix_fmt nv12 -c:a aac -b:a 128k out.mp4

And that solved the problem.

No colour tint; the hevc_qsv encoder produced a 25 Mbps video, while the libx265 encoder with the same preset and quality settings (but without -pix_fmt nv12) produced a 7 Mbps video - both of good quality.
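
To compare such outputs, the bitrate and pixel format can be read back with ffprobe (which ships with ffmpeg) - a sketch:

ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,pix_fmt:format=bit_rate \
  -of default=noprint_wrappers=1 out.mp4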

Sunday, November 09, 2025

Send user digests - Moodle task fails - sends hundreds of emails to Admin

On one of our Moodle instances, emails are supposed to go out through Google Workspace using Google SMTP servers. That being the case, we can't send an unlimited number of emails. So, email notifications had been globally disabled at Site Administration > General > Messaging > Notification settings, which is located at server.url/admin/message.php

When there was a request to allow emails of a particular type to particular users, after enabling the email notification plugin at the above page, the next time a forum post was made, there was a cascade of Send user digests failed and Send user notifications failed messages.

For the immediate failure messages to stop, deleted all the adhoc tasks from mod_forum,

delete from prefix_task_adhoc  where prefix_task_adhoc.component = 'mod_forum';

I had previously set emailstop=1 on the database users table, for all users except the handful which should receive emails. Possibly the reason for the failures was that forcesubscribe=1 was set for many forums for many / all users. 

So, now I set it to zero (email disabled) for all forums on this instance, with

update prefix_forum set prefix_forum.forcesubscribe = 0;

and also updated the defaults,
Site administration > plugins > plugins overview > (chose mod_forum) > settings >

site.url/admin/settings.php?section=modsettingforum

and set subscription mode = subscription disabled for both defaults,
forum_announcementsubscription
and
forum_subscription 

Also, went to prefix_forum_subscriptions table and deleted all entries - which I suppose would be various users' subscriptions to various forums.

Hopefully this is sufficient to prevent the hundreds of failed send notification tasks. 

Update 13 Nov - Again received hundreds of task failed messages. Asked Claude.ai, and made the following changes as suggested, in addition to the above.

# Individual discussion subscriptions might still exist
DELETE FROM prefix_forum_digests;
DELETE FROM prefix_forum_discussion_subs;
UPDATE prefix_user SET maildigest = 0 WHERE emailstop = 1;
# Users might be getting auto-subscribed when they post
UPDATE prefix_forum SET trackingtype = 0;
TRUNCATE prefix_forum_track_prefs;
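
A quick sanity check afterwards, to see whether any forum adhoc tasks or subscriptions remain - a sketch, with the same "prefix_" placeholder as above and hypothetical database credentials:

mysql -u moodleuser -p moodledb -e "SELECT component, COUNT(*) FROM prefix_task_adhoc GROUP BY component;"
mysql -u moodleuser -p moodledb -e "SELECT COUNT(*) FROM prefix_forum_subscriptions;"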

Tuesday, November 04, 2025

run 'Issue Certificates' task more often

There was a request for setting up email notifications for one Moodle course, a particular user to be notified when any user completes that course. I had earlier set many tasks to run less often due to our server being overloaded. 'Issue Certificates' task was one of them, especially since it needed some optimizations as per this thread. Checking the logs, I could see that issue certificates was taking between 30 and 120 seconds - so, increased the scheduled task cron frequency from once a day to once an hour, for two of our Moodle instances. 

Set to run every hour at the 30th minute (8:30, 9:30 etc) on one instance and at the 45th minute (8:45, 9:45, etc) on the other instance, in Site Administration > Server > Tasks > Scheduled tasks > \mod_customcert\task\issue_certificates_task 
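
For a one-off run outside the cron schedule (for example, to time the task), the scheduled task can also be run from the shell - a sketch, assuming CLI access to the Moodle root directory:

cd /path/to/moodle
php admin/tool/task/cli/schedule_task.php --execute='\mod_customcert\task\issue_certificates_task'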

Wednesday, October 29, 2025

installing python modules on Linux and Windows

For trying out LLM-generated python scripts for "warping" (still a work in progress) -

On Windows, various python modules can be installed with 
pip install modulename
where we sometimes have to do an internet search to find the exact modulename. For example:

pip install pillow

On my Linux Mint machine, where I've installed python using the default deb package with apt, I got

ModuleNotFoundError: No module named 'imageio'
and when trying to install with pip,

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
    python3-xyz, where xyz is the package you are trying to
    install.

sudo apt install python3-imageio # this worked.
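
For modules that don't have a python3-xyz deb package, a per-project virtual environment sidesteps the "externally managed" restriction - a sketch (may first need sudo apt install python3-venv):

python3 -m venv ~/venvs/warping     # create an isolated environment
source ~/venvs/warping/bin/activate # activate it for this shell
pip install imageio pillow          # pip now installs into the venv, not system-wide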

Sunday, October 19, 2025

handwriting recognition - ChatGPT is still the best

Uploading a page from my diary and checking how good the various LLMs are at handwriting recognition - ChatGPT is good for regular snaps.

ChatGPT -
JUN 7 FRI
 Social service instead of jogging as you're coming – making the Ashoka trees straight.
 Afternoon no class – we went to Mandir ~ 1:45, You came ~ 3:15. Vedaṁ, balloons, exerts, song – W/N pg writing & study this time Krishnamoorthy, IMI – problems in C...

Claude had done well at coding - better than ChatGPT and Gemini, but fared worse in handwriting recognition - Claude's version -
JUN 7 FRI
Bowl Service instead of dragging so forth coming - making the little trees sharply. Afternoon no class - we went to Nandu ~ 1:45, Su came ~ 3:15. Yadav, balloons, esotts, songs N/N pg writing & study his tone. Kishnamathy I/W - problems in C...

My corrected version - 
Social service instead of jogging as You're coming - making the Ashoka trees straight. Afternoon no class - we went to Mandir ~ 1:45 pm, You came ~ 3:15 pm. Vedam, balloons, escorts, song – WIN prg  during waiting at Mandir & study hours time. Krishnamoorthy, 1st MSc - problems in C...

Edit - 9 Nov - But when I edited together 5 pages to form a single long page, ChatGPT started hallucinating instead of transcribing!

Excerpt from ChatGPT's version of July 5-14 - 
JUL 5, FRI
 Toured Siddalingeshwar Plantation.
 Found good crop condition — 4½ acres of Amlan.
 Evening went to A.P.S. meeting at old students’ association.
 9:45 P.M. came home.
 Supper.

Actual corrected version,
"Pseudo heterodyning" - frustrating as could get only π phase change. Used signal analyzer to acquire data. S4 again seminars on 902...

Timetable again reverting to old schedule, 9 to 3, from Monday.




Saturday, October 18, 2025

claude is good at coding

Pleasantly surprised to find that Claude.ai, running the Sonnet 4.5 model, with just a free plan and a single prompt, created a usable UI for a timeline editor in html / javascript, when supplied with javascript helper functions copy-pasted along with the prompt - the initial commit is at




Edit - Subsequent commits were needed to make it actually functional, connect to OpenSpace, etc. using code copied from Jeff Nee,


The "finished product" is here,

Friday, October 03, 2025

hidden restrictions in Airtel's "unlimited 5G" plans

According to this youtube video, there are some terms and conditions which are shown only in the Airtel app, by which

  1. If Airtel thinks it is being used for commercial purposes, they can restrict the "unlimited 5G"
  2. If the data transferred exceeds 300 GB in a month, they can consider it "commercial"
  3. Tethered use of 5G data - using a mobile hotspot - is not included in the "unlimited 5G"
And one of the comments on that video pointed out why the 4G data cap sometimes gets used up even though 5G is shown in the status area at the top right of Android phones: Airtel sometimes uses its 4G infrastructure to deliver 5G, in which case the 4G billing gets used. And apparently, the difference can be noticed on Samsung phones from the 5G logo which is displayed.

It seems 5G with a white background = true 5G.




And 5G with white text = 4G masquerading as 5G.

And this seems to be confirmed by this page, 
https://androidsrc.net/android-samsung-phone-symbols/




Sunday, September 28, 2025

Youtube 360 VR videos with a poor internet connection

Most of the time, our office internet connection can't support 2160s resolution for 360 VR videos on Youtube. My current solution is to download the youtube videos using yt-dlp - 

./yt-dlp_linux https://www.youtube.com/watch?v=yHQYd5i7CAE

and then watch.
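
If the default download doesn't pick up the 2160s stream, the available formats can be listed and a specific selection forced - a sketch (format availability varies per video):

./yt-dlp_linux -F https://www.youtube.com/watch?v=yHQYd5i7CAE                                      # list formats
./yt-dlp_linux -f "bestvideo[height<=2160]+bestaudio" https://www.youtube.com/watch?v=yHQYd5i7CAE  # pick up to 4K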

internet problems with our old wifi router

Found that I had to restart the router often - sometimes more than once a day - when DNS would fail. So, instead of using the router's DNS, decided to use 8.8.8.8 - changed the IPv4 settings on the desktop (Linux Mint, "Advanced Network Configuration") to use a static ip, and added the DNS address. Did not make any changes on the router.
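
The same DNS change can be made from the command line with nmcli - a sketch, where the connection name "Wired connection 1" is a placeholder for whatever nmcli connection show lists:

nmcli connection modify "Wired connection 1" ipv4.dns "8.8.8.8 8.8.4.4" ipv4.ignore-auto-dns yes
nmcli connection up "Wired connection 1"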

Tuesday, September 23, 2025

Google Apps Script - get the email of signed in user

There was a request to create a Google Apps Script which would do some spreadsheet lookups based on the email of the user, who would authenticate by signing in to the deployed web app - which would be deployed as having access to "Anyone with a Google account".

Apparently the script would work for anyone having an email id in the same workspace domain, but it would not work for users with generic @gmail.com email ids. 

Session.getActiveUser().getEmail() has the problem that it would need the script to run as the user who is logging in (and not as the creator of the script). Apparently this restriction does not apply for users within the same google workspace - 
"the user's email address is not available in any context that allows a script to run without that user's authorization, like a simple onOpen(e) or onEdit(e) trigger, a custom function in Google Sheets, or a web app deployed to "execute as me" (that is, authorized by the developer instead of the user). However, these restrictions generally do not apply if the developer runs the script themselves or belongs to the same Google Workspace domain as the user."

So, made a workaround - first, a script which will run as the user who is logging in, which will get the email - and then POST it to the second script, which will run as the creator of the script, which will process the information. At the POST step, we could add our own authentication using some salted hash etc. 

The first script would be something like - 

function doGet(e) {
  let userEmail = Session.getActiveUser().getEmail();
  let targetUrl = "https://script.google.com/macros/s/AKfycbyx4CbCNIpgmLd2iaWF4buFh_WZtIyOY3vpwdht_Gillycyflpgu1C2bN-j77P2CD3j/exec"; // Replace with your other script's URL

  let html = `
    <!DOCTYPE html>
    <html>
      <body>
        <p>You are signed in as <b>${userEmail}</b></p>
        <form method="post" action="${targetUrl}" target="_blank">
          <input type="hidden" name="userEmail" value="${userEmail}">
  <!-- add other hidden inputs with salted hash or whatever for extra authentication -->
          <button type="submit">Proceed</button>
        </form>
      </body>
    </html>
  `;

  return HtmlService.createHtmlOutput(html);
}

// With deployment access to "Anyone with a Google account"
// Execute As "The User accessing the web app" 


And the second script would have a doPost() like -

function doPost(e) {
  // e.parameter contains form fields
  let userEmail = e.parameter.userEmail;
  // do whatever other parameter checks for extra authentication

  // Call your function to fetch data
  let result = getData(userEmail);

  let html;
  if (result && result.found) {
    let data = result.data;
    html = `
      <!DOCTYPE html>
      <html>
        <body>
          <h3>Data Details</h3>
          <p><b>Data1:</b> ${data.Namefield1}</p>
          <p><b>Data Num:</b> ${data.Num}</p>
          <p><b>Data Program:</b> ${data.Program}</p>
        </body>
      </html>
    `;
  } else {
    html = `
      <!DOCTYPE html>
      <html>
        <body>
          <p>No data found for <b>${userEmail}</b></p>
        </body>
      </html>
    `;
  }
  return HtmlService.createHtmlOutput(html);
}

Monday, September 22, 2025

warping with ffmpeg - and some filter_complex examples

Since OCVWarp uses OpenCV remap() and there does exist a remap filter in ffmpeg also, revisited ffmpeg remap in order to make use of the accelerated NVidia codecs, which ffmpeg supports. Asked ChatGPT to generate python code to create the required map files, debugged the code manually, and put the code at

Currently it's just for the spherical mirror warping of fulldome content, but could potentially be extended to all of OCVWarp's functionality.

This ffmpeg command using the accelerated hevc_nvenc codec runs at 13 fps on a desktop with an NVidia GTX 1060, as against 2.5 fps using OCVWarp saving to avc1 codec output.

ffmpeg -i input.mp4 -i map_x_directp2.pgm -i map_y_directp2.pgm -i weight_alpha_mask.png -filter_complex "
  [0:v]scale=3840:2160[scaled];
  [scaled][1:v][2:v]remap[remapped];
  [3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
  [remapped][mask_rgb]blend=all_mode=multiply[out]
" -map "[out]" -map 0:a -c:v hevc_nvenc -preset p5 \
-cq 23 -rc vbr -maxrate 15M -bufsize 26M \
-c:a aac -b:a 128k output.mp4

Can potentially even create a UI using Tkinter in python, which would be cross-platform.

Some notes about the ffmpeg commands copy-pasted from the readme there - 

-ss for the start time should come before the input, and 
-t for the duration should come just before the filter_complex - 

For starting at two minutes and stopping after 10 seconds duration,
ffmpeg -ss 00:02:00 -i input.mp4 \
       -i map_x_directp2.pgm \
       -i map_y_directp2.pgm \
        -i weight_alpha_mask.png \
       -t 10 \
-filter_complex "
 [0:v]scale=3840:2160[scaled];
  [scaled][1:v][2:v]remap[remapped];
  [3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
  [remapped][mask_rgb]blend=all_mode=multiply[out]
" -map "[out]" -map 0:a -c:v hevc_nvenc -c:a copy output.mp4


And a problematic video was fixed using an initial crop to 4096x4096 (even though avidemux already showed it as 4096x4096) -

ffmpeg  -i "short_nometadata.mp4"        -i map_x_directp2.pgm        -i map_y_directp2.pgm         -i weight_alpha_mask.png      -filter_complex "[0:v]crop=4096:4096[cropped]; [cropped]scale=3840:2160[scaled]; [scaled][1:v][2:v]remap[remapped];[3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
  [remapped][mask_rgb]blend=all_mode=multiply[out]
" -map "[out]" -map 0:a -c:v hevc_nvenc -preset p5 -cq 23 -rc vbr -maxrate 15M -bufsize 26M -c:a aac -b:a 128k short_w2.mp4

But unfortunately, the quality of the warped output is slightly lower than what is seen with OCVWarp - slightly less sharp. Perhaps changing the workflow to first remap and then resize (instead of resize to 3840x2160 first and then remap) might improve things.

Edit - 6 October 2025 - Yes - doing the remap filter first, at the full resolution of the input video, and then resizing to the output resolution brings the quality up to as good as or better than OCVWarp. ("Better" when OCVWarp's default avc1 video codec settings don't deliver as good quality as the hevc_nvenc settings above, I guess.) A separate repo has been created for this, incorporating a Python Tkinter UI,
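
For reference, a sketch of that reordered chain for an mp4 input, following the same pattern as the frames command in the Nov 21 post above - it assumes the map files were generated at the input's native 4096x4096 resolution:

ffmpeg -i input.mp4 -i map_x_directp2.pgm -i map_y_directp2.pgm -i weight_alpha_mask.png \
  -filter_complex "
    [0:v][1:v][2:v]remap[remapped];
    [3:v]format=gray,scale=4096:4096,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
    [remapped][mask_rgb]blend=all_mode=multiply[blended];
    [blended]scale=3840:2160,format=yuv420p[out]
  " -map "[out]" -map 0:a -c:v hevc_nvenc -preset p5 -cq 23 -rc vbr \
  -maxrate 15M -bufsize 26M -c:a aac -b:a 128k output.mp4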





Thursday, September 18, 2025

dd command to create linux boot disk

via https://linuxvox.com/blog/create-usb-boot-linux-mint/

sudo dd if='/path/to/linuxmint-22-cinnamon-64bit.iso' of=/dev/sdd bs=4M status=progress

2810183680 bytes (2.8 GB, 2.6 GiB) copied, 140 s, 20.1 MB/s

So, on my machine, it took less than three minutes to create a 2.8 GB boot disk.
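
Since dd will silently overwrite whatever device of= points at, it is worth double-checking the device name first, and flushing buffers afterwards - for example:

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT   # confirm which device is the USB stick before writing to it
sync                                  # after dd finishes, flush buffers before unplugging the stick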

Wednesday, September 17, 2025

Fujitsu mini PC not booting

I had a couple of Fujitsu mini PCs purchased cheaply, used from amazon - but neither of them would boot up now, a couple of years after I had used them last. Suspected the BIOS battery - used these videos to help with opening them up - similar, ours was Q920 without the DVD drive - 



Got the CR2032 lithium cell bios batteries also from Amazon, put in new batteries, and they booted up fine. 


Edwiser Bridge Plugin for Moodle

There was a request to install Edwiser Bridge Plugin for Moodle which would enable integration between Moodle and WordPress. 



I was spooked by some of these requirements, and also due to 

"vulnerability in Moodle that allows unauthenticated users to access sensitive user data through unauthenticated REST API calls. - fixed in 4.5.3"

So, I replied that I would not be responsible for security issues, if any, if this plugin is installed and asked for head of the institution approval if it was to be installed. The request was then withdrawn, so it was not installed.


Tuesday, September 16, 2025

the find command and mtime - finding disk space used by old files

 Asked ChatGPT, "linux commandline to list total disk space used of all files created or modified inside a directory more than one year ago" and got the following - 

You can do this with find + du or find + stat + awk.
Here’s a one-liner that lists the total disk space used by files in a directory older than one year:

find /path/to/dir -type f -mtime +365 -printf "%s\n" | \
awk '{sum+=$1} END {print sum/1024/1024 " MB"}'

Explanation:

  • find /path/to/dir → search inside that directory.

  • -type f → only files.

  • -mtime +365 → modified more than 365 days ago (roughly one year).

  • -printf "%s\n" → print file sizes in bytes.

  • awk → sums up sizes and prints total in MB (divide by 1024/1024).

If you prefer human-readable output directly:

find /path/to/dir -type f -mtime +365 -print0 | du --files0-from=- -ch | tail -n1

This gives you a total in du’s human-readable format.





 Asked ChatGPT, "tree view of files and directories larger than 1 GB from linux command line" and got the following - 

You can combine find, du, and tree-style output to list only files/directories larger than 1 GB in a tree view. Here are a couple of approaches:


1. Using find with tree

find /path/to/search -type f -size +1G -print | tree --fromfile
  • find ... -size +1G → finds files > 1 GB

  • tree --fromfile → prints a tree structure from the list

If you want directories too (based on contents size), use du instead (see below).


2. Using du for directories/files with sizes

du -h --threshold=1G --max-depth=3 /path/to/search | sort -h
  • --threshold=1G → only show entries larger than 1 GB

  • --max-depth=3 → how deep to go

  • sort -h → human-readable sort

This shows sizes and paths (not tree formatting, but size summary is clearer).


3. Pretty tree-like display with du

du -h --threshold=1G --max-depth=3 /path/to/search | sort -hr | 
awk '{print $2}' | tree --fromfile

This gives you a tree structure of directories/files whose total size is ≥ 1 GB.


✅ If you want only the largest offenders (say top 20 big files/directories over 1 GB), you can do:

find /path/to/search -type f -size +1G -exec du -h {} + | sort -hr | head -20

For this, I had to sudo apt install tree

Sunday, September 14, 2025

started using github projects for task tracking

How to use github projects - a getting started video - 
https://www.youtube.com/watch?v=oPQgFxHcjAw 

Opened a private repo with issues, and with ChatGPT's help, linked it to a project. (Projects --> New Project in the github web UI for that repo). Automation to create a new issue worked after some tweaking - the github actions yml is copy-pasted below:

name: Instantiate Planetarium Show Task
on:
  workflow_dispatch:
    inputs:
      show_title:
        description: 'Title of the Planetarium Show'
        required: true
      deadline:
        description: 'Deadline'
        required: false

permissions:
  issues: write
  contents: read
  
jobs:
  create_task_issue:
    runs-on: ubuntu-latest
    steps:
      - name: Build issue content
        run: |
          echo "## Show Details" > issue.md
          echo " " >> issue.md
          echo "**Show Title:**  " >> issue.md
          echo "${{ github.event.inputs.show_title }}" >> issue.md
          echo "" >> issue.md
          echo "**Deadline:**  " >> issue.md
          echo "${{ github.event.inputs.deadline }}" >> issue.md
          echo "" >> issue.md
          echo "## Steps" >> issue.md
          echo "" >> issue.md
          echo "- [ ] Create script" >> issue.md
          echo "- [ ] Create Hindi translation" >> issue.md
          echo "- [ ] Create Telugu translation" >> issue.md
          echo "- [ ] Review Hindi translation" >> issue.md
          echo "- [ ] Review Telugu translation" >> issue.md
          echo "- [ ] Record Hindi VO" >> issue.md
          echo "- [ ] Record Telugu VO" >> issue.md
          echo "- [ ] Edit Hindi VO" >> issue.md
          echo "- [ ] Edit Telugu VO" >> issue.md
          echo "- [ ] Warp ready for our theatre" >> issue.md
          echo "- [ ] Eng-Hin-Tel audio" >> issue.md
          echo "- [ ] Final audio mix" >> issue.md
          echo "- [ ] Save backups" >> issue.md
          echo "- [ ] Post on website" >> issue.md
          echo "" >> issue.md
          echo "---" >> issue.md
          echo "" >> issue.md
          echo "" >> issue.md

        
      - name: Create issue from template
        uses: peter-evans/create-issue-from-file@v4
        with:
          title: "[Show] ${{ github.event.inputs.show_title }}"
          content-filepath: issue.md
          token: ${{ secrets.GITHUB_TOKEN }}
          labels: show,task
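
For reference, a workflow_dispatch workflow like this can also be triggered from a terminal with the GitHub CLI - a sketch, assuming gh is installed and authenticated against the private repo:

gh workflow run "Instantiate Planetarium Show Task" \
  -f show_title="Test show" \
  -f deadline="2025-12-31"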

This script had to be re-written due to ChatGPT repeatedly using heredoc syntax which would clash with github's yml. I had to explicitly ask it to "Please use echo >> on each line, and no heredoc."

One more automation script, meant to automatically move items in the project from "TODO" to "In Progress" when any of the checkboxes above was ticked, failed. ChatGPT usually fails when the documentation is not complete and / or there are conflicting earlier documentation pages, as in this case. It would repeatedly suggest using

permissions:
  issues: write
  projects: write

when actually such a permission is no longer accepted by Github - it was part of "classic projects" - https://docs.github.com/en/enterprise-server@3.16/issues/planning-and-tracking-with-projects/creating-projects/migrating-from-projects-classic

Apparently with projectsV2, the REST api does not work, only GraphQL API works. And it needs a classic personal access token to work. And the code generated by ChatGPT had errors which I could not get it to fix after multiple iterations over multiple days, so I just left updates to be done manually by clicking on the drop-down seen in the linked project for each issue. I could probably read the GraphQL API documentation and try again, but right now, not motivated to do so.

Tuesday, September 09, 2025

unable to use key-based auth

One of our servers had been upgraded, and after the upgrade, even though my public key was in the authorized_keys file, I was unable to ssh into that machine using a script with key-based authentication. 

It turned out the script which I was using to connect had the parameter
-o PubkeyAcceptedKeyTypes=ssh-rsa
which was necessary for the earlier server - earlier post about this - but not necessary for the current server - removing this parameter allowed the script to log in using key-based auth.

Monday, September 08, 2025

forcing 2FA for all cloudflare users

 After a cloudflare advisory asking us to rotate credentials due to a "Compromise Impacting Cloudflare Salesforce Cases", I was asked to force 2FA for all cloudflare users. 

https://developers.cloudflare.com/fundamentals/user-profiles/2fa/

The force-2FA option is available for super-admins in the cloudflare admin panel, at
Manage Account > Members > Members 2FA enforcement.  

Saturday, September 06, 2025

getting referral ids for Moodle course enrolments

There was a request to store and report the referral ids passed via GET request from another site onto one of our Moodle instances. I turned to ChatGPT (free) for help. 

For the immediate need of getting referral ids from past logs, it suggested some python code. Which, of course, did not work on the first try. But 5 iterations later, got something useful out of it, which is shared at
https://github.com/hn-88/moodle-plugin-local-referrals/tree/main/temporary%20python%20code 

Then, it suggested writing a local plugin, and also suggested skeleton code. But the suggested code would not work if the user was not logged in to our site before clicking the referral link on the external site. It suggested saving the referral code to $SESSION, but that did not seem to work. Debugging via the Apache error log, using lines like
error_log('[local_referrals] function on_user_enrolled() ... ');
showed that the referral id in $SESSION was being set to null when the user logged in to our site.

Tried asking gemini.google.com (free) since ChatGPT seemed to be caught in a loop and unable to provide a fix. Gemini immediately looked up Moodle documentation and said that clearing the session (starting a new session) when a user logs on to Moodle is a known feature - and so we should store the referral id elsewhere - either in the database or in a cookie. Gemini in "Deep Research" mode also wrote a couple of essays about the issue, which could be summarized as my earlier sentence!

Then Gemini generated some code, but the code was incomplete. Asked ChatGPT to complete it - then asked Gemini to correct the errors introduced by ChatGPT - and the result is at
https://github.com/hn-88/moodle-plugin-local-referrals/ 

Some excerpts from the chats with ChatGPT and Gemini are in the wiki of that github repo - 
https://github.com/hn-88/moodle-plugin-local-referrals/wiki

Friday, September 05, 2025

Microsoft Azure - no meaningful way to contact regarding technical issues for non-profits

We're running some VMs on Microsoft Azure with free credits provided to non-profits. This means that in the Azure support section on the Azure portal, only billing support is visible, not technical support - that would probably need a paid plan. 

There was an intermittent network issue with one of our VMs - Ubuntu update not being able to reach the servers, ssh into the VM failing intermittently, and so on. 

I changed the type of support from Technical to Subscription, and then was able to submit the report.

Some more data points for troubleshooting -
1. Our other Azure VMs in other regions do not have this issue.
2. Some of our team members had issues connecting to this server on Aug 19 2025, between 3 & 7 pm IST. At around 8:45 pm IST, I could log on to the server without issues, I used BSNL as my ISP. On Aug 20, 11:30 am, the team at Hyderabad could not connect to the VM using Jiofiber ISP, but I could connect via Telexair located in Bangalore.
3. Rebooting the VM itself did not help - probably some other routing issue at your Central India location.

Edit 25 Sep - But there was no response from Microsoft other than that they would look into it - though the problem seemed to have been resolved in a few hours. Twenty days later, there was an email from the VM team, saying that the network team had passed the issue on to them, so I just replied, "We have not had issues after my last update to the ticket. You can call me if you wish between 9 and 11.30 am Indian Standard time or between 12.30 and 3.30 pm IST, but I have nothing further to add."

Google's lack of human support is well known. So, Azure is more of the same. I believe this is the norm for all these "cloud" companies.

Thursday, August 28, 2025

resizing an Azure VM

Just going to portal.azure.com > VM > Overview > Sizing, choosing a different size, and hitting Resize did not work as planned, though a message popped up saying the VM would be restarted on resizing. After five minutes, stopped the VM - then it resized - then started it up again. MySQL took a couple of minutes to come up, so initially the Moodle instances complained about database connection failure.

Wednesday, August 13, 2025

using ABeeZee TTF font on a Moodle instance

There was a request to enable the ABeeZee TTF font on one of our Moodle instances, since it is recommended for inclusivity etc. https://fonts.google.com/specimen/ABeeZee/about says it is a "children's learning font".

It was later clarified that this font was desired for all content on the site and for the custom certificates. So,

1. For adding custom fonts to course content and body text, I believe the method is to edit the CSS of the current theme, as mentioned here - 

In our case, the theme is Moove.


2. For the custom certificate plugin, the method - adding the font via "Manage PDF fonts" using the TCPDF plugin - is the one we have followed earlier.
So it should be working for creating custom certificates. 

For changing the font in the theme, it is relatively easy to implement if we were to change all content to use this font.

Site administration → Appearance → Additional HTML
and I've added inside the "Within Head" box

<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=ABeeZee&display=swap">

and inside the Moodle theme (which is used on our sites) advanced settings box, (which has the url oursite.org/admin/settings.php?section=themesettingmoove#theme_moove_advanced )

body {
    font-family: 'ABeeZee', sans-serif !important;
}

(which makes the entire body of the html page into ABeeZee.)

This was implemented for two of our sites after first trying it on our development instance.

Sunday, July 27, 2025

turn off gmail "smart" summaries

 via https://www.digitalinformationworld.com/2025/05/gmail-is-reading-and-summarizing-your.html

On Android --> GMail app --> Hamburger menu --> Settings --> choose the account for which you want to disable --> Untick smart features checkbox

Monday, July 21, 2025

lots of github and Mac learnings

Trying to compile OpenSpace on MacOS - 

Learnt a lot of things from Copilot and ChatGPT - since I didn't have any humans available to ask specific questions. Also bumped against the limitations of the LLMs - sometimes they give wrong advice with full confidence! Perhaps adding prompts to cite sources, mention confidence level etc might help. Some points, in no particular order.
  •  XCode support for some C++ features is missing - C++ language support - and in some cases, AppleClang's interpretation is stricter than gnu c++ or MSVC. Also, some flags specific to MacOS would change from old versions to new versions of MacOS. And homebrew support for older Macs is spotty. So, supporting Macs is a challenge.

  • Verifying and testing on github runners has some issues due to their headless nature.

  • Uploading to draft releases works best if that draft release is pre-created - sometimes the tag creation takes a few seconds, leading to failure in uploading the artifact to the draft release. Or, can create a new tag, wait for a few seconds, and then upload the artifact to draft release using a personal access token - PAT.  

  • MacOS, especially after 15.2, does not make it easy to run unsigned apps. Some workarounds which exist now, like removing the quarantine flag from the internal binary, may not work in the future (a sketch of both workarounds follows after this list). It is possible that Debug builds may work and (unsigned) Release builds may not work, as in the case of Android apk files - unless we add a codesign step after the build.

  • Even with the bundling of dylibs using otool, the binary still seems to look for dylibs in /opt/homebrew etc. So, the bundling code suggested by ChatGPT still has flaws. Or maybe there are some dependency calls which are only at runtime and hence not detected by otool. 
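
As mentioned in the quarantine / codesign bullet above, the two workarounds in command form - a sketch only, with a hypothetical app path, and Gatekeeper behaviour can change between macOS releases:

# remove the quarantine attribute so the downloaded, unsigned app is allowed to run
xattr -dr com.apple.quarantine /Applications/OpenSpace.app

# or ad-hoc sign the bundle after the build (the "-" identity means ad-hoc signing)
codesign --force --deep --sign - /Applications/OpenSpace.app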

Saturday, July 12, 2025

2FA required for Google Workspace admin users

Passed on Google's message that two-factor authentication / 2FA / 2-Step Verification (2SV) is required for Google Workspace admins starting August 13 2025 - 

.... please alert any of the users who have been given access to admin.google.com in case they have not enabled 2FA already.

How to enable 2FA:

Start by logging on to your Google account for our.domain.org

  • Click your avatar (top-right) and choose Manage your Google Account.
  • Click Security in the left pane.
  • In the Signing in to Google section, click 2-Step Verification.
  • Click the Get Started button.
  • Enter your password (if requested).
  • Click Continue.
  • Click Send to send a text message to your phone.
  • Enter the 6 digit code sent to your phone and click Next.
  • Click Turn On to finish the process.
  • Your account is now protected with 2 step verification.

Saturday, June 28, 2025

solution for rendering black frame & moving a Blender keyframe to a different time

The blend file linked from

would not render out of the box on Cycles or Eevee with Blender 4 (the tutorial was made for Blender 2.8). Thought that one possible reason was that I had modified the number of render samples:

"Samples should not be lower than volumetric samples."

But Cycles was rendering with multiple samples, up to the final moment, when the render would go black. ChatGPT confirmed my hunch that this was due to some compositing settings. Disabled compositing in Render properties --> Post-processing - and then the render was showing up. 4K renders of each frame were taking around 5 to 10 minutes on the GTX 1060 GPU. Sheepit render farm rendered the project in less than a day, after making modifications to the length of the animation and using a 180 degree fisheye "fulldome" panoramic camera.

For moving keyframes - to make the animation 10x longer, move a keyframe at frame 25 to frame 250, as an example - https://blender.stackexchange.com/questions/221434/how-to-move-selected-keyframes-to-current-frame-or-a-specified-frame-number#221446

"Hovering on the action editor or dopesheet you can select the keyframes you want to move, set the time cursor to the desired frame and press S,X,0,Enter."

The final product is below. Fulldome (180 degree fisheye) 4K video suitable for planetariums. The full 4K resolution video is made available at archive.org






Tuesday, June 17, 2025

descreen and perspective correction with gimp

To remove moire patterns from scans of silk-screen printed photos, the gmic plugin's fourier filter can be used. To install - gimp-gmic is in the Software Manager of Linux Mint.

https://www.youtube.com/watch?v=3137dDa6P4s

According to this video, the way to do it -
1. first fourier transform in gmic
2. Then make the white dots black for everything except centre
3. then inverse
4. then smooth medium (also in gmic)

(This page refers to a specific descreen plugin, but that is for Gimp 2.0 and the link also seems broken, so didn't try out that.)

For perspective correction, especially for "re-clicked" photos taken with mobile from an angle,

Tools --> Transform tools --> Perspective --> choose Corrective (backward) for Direction, 

place the four handles on the four corners of the photo to be corrected. Then, those corners will be stretched to the aspect ratio of the original image. After that, some Transform --> Scale changes may be needed, with the aspect ratio unlocked, in case the photo being corrected has a different aspect ratio. 

http working but not https

There was a report that one of our websites was down. Reported to PB, his reply was:

Found that http was working but not https

Then did the following:

root@ourserver:openssl s_client -connect our.domain.org:443 -servername our.domain.org

and got the error message
verify error:num=20:unable to get local issuer certificate

# comment out in  /etc/apache2/sites-enabled/our.domain.org-le-ssl.conf

# SSLCertificateChainFile /etc/letsencrypt/live/our.domain.org/chain.pem

and then restarted apache2
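
A quick way to confirm a fix like this, run on the server after editing the conf file - a sketch:

apachectl configtest     # check the apache configuration before restarting
systemctl restart apache2
openssl s_client -connect our.domain.org:443 -servername our.domain.org </dev/null | grep -i verify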

Friday, June 13, 2025

download and run an AWS VM snapshot locally on Virtualbox

We took a lot of time to accomplish this - run an AWS VM snapshot locally on Virtualbox and test if all the services are working - apache, mysql, dotnet, wordpress on php. Taking help from ChatGPT, the working method is outlined below.


2. Export the ami as vmdk - Copy-pasting below from Gemini's howto - (our initial attempts to do the export failed, as the role named "vmimport" had not been created with the required permissions. We could use an AWS cloud shell for the aws cli steps.) 

You can export an AWS AMI as a VMDK file using the AWS VM Import/Export service. This process involves exporting the AMI to an Amazon S3 bucket, from where you can then download the VMDK file.

Here's a detailed breakdown of the steps and important considerations:

1. Prerequisites and Limitations:

Before you start, be aware of these crucial points:

  • AWS CLI: You'll primarily use the AWS Command Line Interface (CLI) for this process, as it's the most common and robust way to manage VM Import/Export tasks. Ensure you have it installed and configured with appropriate AWS credentials.
  • S3 Bucket: You need an Amazon S3 bucket in the same AWS Region as your AMI to store the exported VMDK file.
  • IAM Role (vmimport): AWS VM Import/Export requires an IAM service role (typically named vmimport) with the necessary permissions to read from and write to your S3 bucket. If you don't have this, you'll need to create it. AWS documentation provides the required trust policy and permissions.
  • AMI Export Limitations:
    • Not all AMIs can be exported. You generally cannot export AMIs that contain third-party software provided by AWS (e.g., Windows or SQL Server images, or any AMI from the AWS Marketplace), unless they were originally imported as "Bring Your Own License" (BYOL) through VM Import/Export or AWS Application Migration Service (MGN)/AWS Elastic Disaster Recovery (AWS DRS).
    • You cannot export an AMI with encrypted EBS snapshots.
    • VMs with volumes larger than 1 TiB might have limitations (though for larger disks, you might need a manifest file).
    • You can't export an AMI if you've shared it from another AWS account.
    • You can't have multiple export image tasks in progress for the same AMI simultaneously.
    • Volumes attached after instance launch are not exported; only those specified in the AMI's block device mapping are included.
  • Consistent AMI: For the best results, ensure your AMI was created from a stopped instance or an application-consistent snapshot to avoid data corruption in the exported VMDK.

2. Steps to Export the AMI as VMDK:

a. Create an S3 Bucket (if you don't have one):

You can do this via the AWS Management Console or AWS CLI.

AWS Console:

  1. Go to the S3 service.
  2. Click "Create bucket."
  3. Provide a unique bucket name.
  4. Choose the same AWS Region as your AMI.
  5. Keep "Block all public access" enabled unless you have a specific reason to disable it (and understand the security implications).
  6. Click "Create bucket."

AWS CLI:

Bash
aws s3 mb s3://your-export-bucket-name --region your-aws-region

b. Create the IAM Role (if you don't have one):

This is crucial for allowing AWS VM Import/Export to interact with your S3 bucket.

  1. Create a trust policy file (trust-policy.json):
    JSON
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "vmie.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringEquals": {
              "sts:ExternalId": "vmimport"
            }
          }
        }
      ]
    }
    

  2. Create the IAM role (vmimport):
    Bash
    aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

  3. Create an inline policy file (role-policy.json):

Replace your-export-bucket-name with your actual bucket name.
    JSON
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket",
            "s3:GetObject"
          ],
          "Resource": [
            "arn:aws:s3:::your-export-bucket-name",
            "arn:aws:s3:::your-export-bucket-name/*"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:CreateBucket",
            "s3:PutObject",
            "s3:PutObjectAcl"
          ],
          "Resource": [
            "arn:aws:s3:::your-export-bucket-name",
            "arn:aws:s3:::your-export-bucket-name/*"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeTags",
            "ec2:CreateTags"
          ],
          "Resource": "*"
        }
      ]
    }

  4. Attach the inline policy to the vmimport role:
    Bash
    aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

c. Start the Export Image Task:

Use the export-image command with the AWS CLI.

Bash
aws ec2 export-image \
    --image-id ami-0123456789abcdef0 \
    --disk-image-format VMDK \
    --s3-export-location S3Bucket=your-export-bucket-name,S3Prefix=exports/ \
    --description "Exported VMDK from MyWebApp AMI"
  • Replace ami-0123456789abcdef0 with the actual ID of your AMI.
  • Replace your-export-bucket-name with the name of your S3 bucket.
  • S3Prefix=exports/ is optional but good practice to organize your exported files within the bucket.
  • --disk-image-format VMDK specifies the output format. You can also choose VHD or RAW.

d. Monitor the Export Task:

The export process can take some time, depending on the size of your AMI. You can check the status using the describe-export-image-tasks command:

aws ec2 describe-export-image-tasks 

The Status field will show the progress (e.g., active, completed, deleting, deleted). Wait until the status changes to completed.


3. Download the vmdk - for this, I tried s3cmd which didn't work, based on this discussion. Just installing aws cli with the method outlined here, and 

aws s3 cp  s3://our-vm-export-2025/exports/export-ami-our-vmdk-file.vmdk localfile.vmdk

worked, after 

aws configure 

and entering the access key and secret key (which PB had to generate and give me). 

4. Running the vmdk and verifying it - It turned out that just creating a new VM with the VMDK file as the root file system - (under advanced) - would make it run, but the network interface provided by Virtualbox would be different from that provided by AWS EC2 - so, had to log on to the console. Since there was no password set (or I couldn't remember it), followed this chroot procedure to set a password for root user. For mounting the VMDK file, the easiest way was to go to another virtualbox vm, and in that vm, add this vmdk with a higher SCSI port number as an additional disk - under Settings - Storage for that VM. 

(qemu-img convert -f vmdk -O raw aws-image.vmdk aws-image.raw etc and mounting with a loop filesystem was very very time consuming, since our hard disk was slow. Hence, avoid this method.)

Then, we could log in, and as prompted by chatgpt, modify the cloud-init file - copy-pasting below - 

ip link # to check for interfaces without having ifconfig installed

#Update /etc/netplan/ (for Ubuntu 18.04+, or /etc/network/interfaces for older systems).

Example for Netplan (Ubuntu):

network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true

In our case, there was an extra 

set-name: ens5

line which needed to be commented out. Then,

sudo netplan apply

ip a # to check if the interface is up, and if it is down,

sudo ip link set enp0s3 up

and now 

ip a 

should show that the interface has an ip address via dhcp. 

In this case, we tested the VM with NAT networking, with port forwarding set on Virtualbox Settings - Network - 2244 for ssh, 8080 for 80, 8443 for 443.

Then, setting /etc/hosts with the ip address of the host - 192.168.1.6 in this case - for the various domains we wished to test, the dotnet app worked fine. Wordpress refused to connect - apparently requests to a host/port other than the configured site URL get redirected by Wordpress by default - and we would need to make some changes to the config file to make it accept such requests on port 8443, like editing wp-config.php with the lines 

define('WP_HOME', 'https://our.domain.org:8443');

define('WP_SITEURL', 'https://our.domain.org:8443');
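
Putting the pieces together, a sketch of how the NATted setup can be exercised from the host machine - the IP and domain are the placeholders used above, the ports are the VirtualBox forwarding rules mentioned earlier, and the ssh username is hypothetical:

# point the test domain at the host running Virtualbox
echo "192.168.1.6   our.domain.org" | sudo tee -a /etc/hosts

# ssh into the guest via the forwarded port, and hit the sites over the forwarded https port
ssh -p 2244 ubuntu@192.168.1.6
curl -k https://our.domain.org:8443/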