Sunday, September 28, 2025

YouTube 360 VR videos with a poor internet connection

Most of the time, our office internet connection can't support the 2160s resolution of 360 VR videos on YouTube. My current solution is to download the videos using yt-dlp - 

./yt-dlp_linux https://www.youtube.com/watch?v=yHQYd5i7CAE

and then watch.

internet problems with our old wifi router

Found that I had to restart the router often - sometimes more than once a day - when DNS resolution would fail. So, instead of using the router's DNS, I decided to use 8.8.8.8 - changed the IPv4 settings on the desktop (Linux Mint, "Advanced Network Configuration") to use a static IP and added the DNS address. No changes were made on the router.

Tuesday, September 23, 2025

Google Apps Script - get the email of signed in user

There was a request to create a Google Apps Script which would do some spreadsheet lookups based on the email of the user, who would authenticate by signing in to the web app - deployed with access set to "Anyone with a Google account".

Apparently the script would work for anyone with an email ID in the same Workspace domain, but it would not work for users with generic @gmail.com IDs. 

Session.getActiveUser().getEmail() has the problem that it needs the script to run as the user who is logging in (and not as the creator of the script). Apparently this restriction does not apply to users within the same Google Workspace - 
"the user's email address is not available in any context that allows a script to run without that user's authorization, like a simple onOpen(e) or onEdit(e) trigger, a custom function in Google Sheets, or a web app deployed to "execute as me" (that is, authorized by the developer instead of the user). However, these restrictions generally do not apply if the developer runs the script themselves or belongs to the same Google Workspace domain as the user."

So, made a workaround - first, a script which runs as the user who is logging in and gets the email, which then POSTs it to a second script, which runs as the creator of the script and processes the information. At the POST step, we could add our own authentication using a salted hash or similar. 

The first script would be something like - 

function doGet(e) {
  let userEmail = Session.getActiveUser().getEmail();
  let targetUrl = "https://script.google.com/macros/s/AKfycbyx4CbCNIpgmLd2iaWF4buFh_WZtIyOY3vpwdht_Gillycyflpgu1C2bN-j77P2CD3j/exec"; // Replace with your other script's URL

  let html = `
    <!DOCTYPE html>
    <html>
      <body>
        <p>You are signed in as <b>${userEmail}</b></p>
        <form method="post" action="${targetUrl}" target="_blank">
          <input type="hidden" name="userEmail" value="${userEmail}">
          <!-- add other hidden inputs with salted hash or whatever for extra authentication -->
          <button type="submit">Proceed</button>
        </form>
      </body>
    </html>
  `;

  return HtmlService.createHtmlOutput(html);
}

// With deployment access to "Anyone with a Google account"
// Execute As "The User accessing the web app" 


And the second script would have a doPost() like -

function doPost(e) {
  // e.parameter contains form fields
  let userEmail = e.parameter.userEmail;
  // do whatever other parameter checks for extra authentication

  // Call your function to fetch data
  let result = getData(userEmail);

  let html;
  if (result && result.found) {
    let data = result.data;
    html = `
      <!DOCTYPE html>
      <html>
        <body>
          <h3>Data Details</h3>
          <p><b>Data1:</b> ${data.Namefield1}</p>
          <p><b>Data Num:</b> ${data.Num}</p>
          <p><b>Data Program:</b> ${data.Program}</p>
        </body>
      </html>
    `;
  } else {
    html = `
      <!DOCTYPE html>
      <html>
        <body>
          <p>No data found for <b>${userEmail}</b></p>
        </body>
      </html>
    `;
  }
  return HtmlService.createHtmlOutput(html);
}
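The "salted hash" authentication mentioned above could work as a shared secret known to both scripts: the first script sends an HMAC of the email as a hidden field, and the second script verifies it before trusting the email. A minimal sketch of the scheme, in Python for illustration (in Apps Script, Utilities.computeHmacSha256Signature would play the same role; the secret value here is a placeholder):

```python
import hashlib
import hmac

SHARED_SECRET = b"replace-with-a-long-random-secret"  # known to both scripts

def make_token(user_email: str) -> str:
    """First script: compute an HMAC over the email, to send as a hidden input."""
    return hmac.new(SHARED_SECRET, user_email.encode(), hashlib.sha256).hexdigest()

def verify_token(user_email: str, token: str) -> bool:
    """Second script: recompute the HMAC and compare in constant time."""
    return hmac.compare_digest(make_token(user_email), token)

if __name__ == "__main__":
    token = make_token("someone@example.com")
    print(verify_token("someone@example.com", token))  # True - valid token
    print(verify_token("other@example.com", token))    # False - forged email
```

This stops a visitor from simply POSTing someone else's email to the second script, though of course anyone who can read the first script's deployment can't extract the secret - it stays server-side in both scripts.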

Monday, September 22, 2025

warping with ffmpeg - and some filter_complex examples

Since OCVWarp uses OpenCV's remap(), and ffmpeg also has a remap filter, revisited ffmpeg remap in order to make use of the accelerated NVIDIA codecs which ffmpeg supports. Asked ChatGPT to generate Python code to create the required map files, debugged the code manually, and put the code at

Currently it's just for the spherical mirror warping of fulldome content, but could potentially be extended to all of OCVWarp's functionality.
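For reference, the map files consumed by ffmpeg's remap filter are just 16-bit grayscale images whose pixel values are the source coordinates to sample from. A toy generator for identity maps (no warping at all - the actual spherical-mirror maps come from the code mentioned above; filenames and the ASCII P2 format choice are just for illustration):

```python
def write_identity_maps(width, height, x_path="map_x.pgm", y_path="map_y.pgm"):
    """Write ASCII (P2) PGM maps with maxval 65535, where each pixel holds
    its own source coordinate, so ffmpeg's remap filter copies pixels
    through unchanged."""
    for path, value_of in ((x_path, lambda x, y: x), (y_path, lambda x, y: y)):
        with open(path, "w") as f:
            f.write(f"P2\n{width} {height}\n65535\n")
            for y in range(height):
                f.write(" ".join(str(value_of(x, y)) for x in range(width)) + "\n")

if __name__ == "__main__":
    write_identity_maps(3840, 2160)
```

A real warp map would instead store, at each output pixel, the input coordinates computed from the spherical-mirror geometry.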

This ffmpeg command, using the accelerated hevc_nvenc codec, runs at 13 fps on a desktop with an NVIDIA GTX 1060, as against 2.5 fps using OCVWarp saving to the avc1 codec.

ffmpeg -i input.mp4 -i map_x_directp2.pgm -i map_y_directp2.pgm -i weight_alpha_mask.png -filter_complex "
  [0:v]scale=3840:2160[scaled];
  [scaled][1:v][2:v]remap[remapped];
  [3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
  [remapped][mask_rgb]blend=all_mode=multiply[out]
" -map "[out]" -map 0:a -c:v hevc_nvenc -preset p5 \
-cq 23 -rc vbr -maxrate 15M -bufsize 26M \
-c:a aac -b:a 128k output.mp4

Can potentially even create a UI using Tkinter in python, which would be cross-platform.

Some notes about the ffmpeg commands copy-pasted from the readme there - 

-ss for the start time should come before the input, and 
-t for the duration should come just before the filter_complex - 

For starting at two minutes and stopping after 10 seconds duration,
ffmpeg -ss 00:02:00 -i input.mp4 \
       -i map_x_directp2.pgm \
       -i map_y_directp2.pgm \
       -i weight_alpha_mask.png \
       -t 10 \
       -filter_complex "
         [0:v]scale=3840:2160[scaled];
         [scaled][1:v][2:v]remap[remapped];
         [3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
         [remapped][mask_rgb]blend=all_mode=multiply[out]
       " -map "[out]" -map 0:a -c:v hevc_nvenc -c:a copy output.mp4


And a problematic video was fixed using an initial crop to 4096x4096 (even though avidemux already showed its size as 4096x4096).

ffmpeg -i "short_nometadata.mp4" \
       -i map_x_directp2.pgm \
       -i map_y_directp2.pgm \
       -i weight_alpha_mask.png \
       -filter_complex "
         [0:v]crop=4096:4096[cropped];
         [cropped]scale=3840:2160[scaled];
         [scaled][1:v][2:v]remap[remapped];
         [3:v]format=gray,scale=3840:2160,colorchannelmixer=rr=1:gg=1:bb=1[mask_rgb];
         [remapped][mask_rgb]blend=all_mode=multiply[out]
       " -map "[out]" -map 0:a -c:v hevc_nvenc -preset p5 -cq 23 -rc vbr -maxrate 15M -bufsize 26M -c:a aac -b:a 128k short_w2.mp4

But unfortunately, the quality of the warped output is slightly lower than what is seen with OCVWarp - slightly less sharp. Perhaps changing the workflow to first remap and then resize (instead of resize to 3840x2160 first and then remap) might improve things.

Edit - 6 October 2025 - Yes - doing the remap filter first, at the full resolution of the input video, and only then resizing to the output resolution, brings the quality up to as good as or better than OCVWarp. ("Better" when OCVWarp's default avc1 video codec settings don't deliver as good quality as the hevc_nvenc settings above, I guess.) A separate repo has been created for this, incorporating a Python Tkinter UI,





Thursday, September 18, 2025

dd command to create linux boot disk

via https://linuxvox.com/blog/create-usb-boot-linux-mint/

sudo dd if='/path/to/linuxmint-22-cinnamon-64bit.iso' of=/dev/sdd bs=4M status=progress

2810183680 bytes (2.8 GB, 2.6 GiB) copied, 140 s, 20.1 MB/s

So, on my machine, it took less than three minutes to create a 2.8 GB boot disk.

Wednesday, September 17, 2025

Fujitsu mini PC not booting

I had a couple of Fujitsu mini PCs, purchased used and cheap from Amazon - but neither of them would boot up now, a couple of years after I had last used them. Suspected the BIOS battery - used these videos to help with opening them up; ours was similar, a Q920 without the DVD drive - 



Got the CR2032 lithium cell BIOS batteries, also from Amazon, put in the new batteries, and they booted up fine. 


Edwiser Bridge Plugin for Moodle

There was a request to install Edwiser Bridge Plugin for Moodle which would enable integration between Moodle and WordPress. 



I was spooked by some of these requirements, and also by a reported 

"vulnerability in Moodle that allows unauthenticated users to access sensitive user data through unauthenticated REST API calls. - fixed in 4.5.3"

So, I replied that I would not be responsible for any security issues if this plugin were installed, and asked for approval from the head of the institution if it was to be installed. The request was then withdrawn, so it was not installed.


Tuesday, September 16, 2025

the find command and mtime - finding disk space used by old files

 Asked ChatGPT, "linux commandline to list total disk space used of all files created or modified inside a directory more than one year ago" and got the following - 

You can do this with find + du or find + stat + awk.
Here’s a one-liner that lists the total disk space used by files in a directory older than one year:

find /path/to/dir -type f -mtime +365 -printf "%s\n" | \
awk '{sum+=$1} END {print sum/1024/1024 " MB"}'

Explanation:

  • find /path/to/dir → search inside that directory.

  • -type f → only files.

  • -mtime +365 → modified more than 365 days ago (roughly one year).

  • -printf "%s\n" → print file sizes in bytes.

  • awk → sums up sizes and prints total in MB (divide by 1024/1024).

If you prefer human-readable output directly:

find /path/to/dir -type f -mtime +365 -print0 | du --files0-from=- -ch | tail -n1

This gives you a total in du’s human-readable format.
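The same total can also be computed in Python, which avoids quoting issues and is easier to extend. A sketch using the file modification time, like find's -mtime (the example path is a placeholder):

```python
import os
import time

def old_files_size_mb(root: str, days: int = 365) -> float:
    """Total size, in MB, of regular files under `root` modified more than
    `days` days ago - the Python equivalent of the find/awk one-liner."""
    cutoff = time.time() - days * 86400
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # broken symlink, permission problem, etc.
            if st.st_mtime < cutoff:
                total += st.st_size
    return total / 1024 / 1024

if __name__ == "__main__":
    print(f"{old_files_size_mb('/path/to/dir'):.1f} MB")
```

Note this sums apparent file sizes (like find -printf "%s"), not allocated disk blocks as du reports, so the two numbers can differ for sparse or small files.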





 Asked ChatGPT, "tree view of files and directories larger than 1 GB from linux command line" and got the following - 

You can combine find, du, and tree-style output to list only files/directories larger than 1 GB in a tree view. Here are a couple of approaches:


1. Using find with tree

find /path/to/search -type f -size +1G -print | tree --fromfile
  • find ... -size +1G → finds files > 1 GB

  • tree --fromfile → prints a tree structure from the list

If you want directories too (based on contents size), use du instead (see below).


2. Using du for directories/files with sizes

du -h --threshold=1G --max-depth=3 /path/to/search | sort -h
  • --threshold=1G → only show entries larger than 1 GB

  • --max-depth=3 → how deep to go

  • sort -h → human-readable sort

This shows sizes and paths (not tree formatting, but size summary is clearer).


3. Pretty tree-like display with du

du -h --threshold=1G --max-depth=3 /path/to/search | sort -hr | 
awk '{print $2}' | tree --fromfile

This gives you a tree structure of directories/files whose total size is ≥ 1 GB.


✅ If you want only the largest offenders (say top 20 big files/directories over 1 GB), you can do:

find /path/to/search -type f -size +1G -exec du -h {} + | sort -hr | head -20

For this, I had to sudo apt install tree
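For systems where installing tree isn't an option, the find -size +1G step can be sketched in Python as well (the search path is a placeholder; indentation by path depth loosely mimics tree's output):

```python
import os

def big_files(root: str, threshold: int = 1 << 30):
    """Yield (path, size) for regular files under `root` larger than
    `threshold` bytes - like `find root -type f -size +1G`."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue
            if size > threshold:
                yield path, size

if __name__ == "__main__":
    # Largest first, indented by directory depth for a rough tree view
    for path, size in sorted(big_files("/path/to/search"), key=lambda t: -t[1]):
        print(f"{'  ' * path.count(os.sep)}{size / 1024**3:.1f}G  {path}")
```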

Sunday, September 14, 2025

started using github projects for task tracking

How to use github projects - a getting started video - 
https://www.youtube.com/watch?v=oPQgFxHcjAw 

Opened a private repo with issues and, with ChatGPT's help, linked it to a project (Projects --> New Project in the GitHub web UI for that repo). Automation to create a new issue worked after some tweaking - the GitHub Actions YAML is copy-pasted below:

name: Instantiate Planetarium Show Task
on:
  workflow_dispatch:
    inputs:
      show_title:
        description: 'Title of the Planetarium Show'
        required: true
      deadline:
        description: 'Deadline'
        required: false

permissions:
  issues: write
  contents: read
  
jobs:
  create_task_issue:
    runs-on: ubuntu-latest
    steps:
      - name: Build issue content
        run: |
          echo "## Show Details" > issue.md
          echo " " >> issue.md
          echo "**Show Title:**  " >> issue.md
          echo "${{ github.event.inputs.show_title }}" >> issue.md
          echo "" >> issue.md
          echo "**Deadline:**  " >> issue.md
          echo "${{ github.event.inputs.deadline }}" >> issue.md
          echo "" >> issue.md
          echo "## Steps" >> issue.md
          echo "" >> issue.md
          echo "- [ ] Create script" >> issue.md
          echo "- [ ] Create Hindi translation" >> issue.md
          echo "- [ ] Create Telugu translation" >> issue.md
          echo "- [ ] Review Hindi translation" >> issue.md
          echo "- [ ] Review Telugu translation" >> issue.md
          echo "- [ ] Record Hindi VO" >> issue.md
          echo "- [ ] Record Telugu VO" >> issue.md
          echo "- [ ] Edit Hindi VO" >> issue.md
          echo "- [ ] Edit Telugu VO" >> issue.md
          echo "- [ ] Warp ready for our theatre" >> issue.md
          echo "- [ ] Eng-Hin-Tel audio" >> issue.md
          echo "- [ ] Final audio mix" >> issue.md
          echo "- [ ] Save backups" >> issue.md
          echo "- [ ] Post on website" >> issue.md
          echo "" >> issue.md
          echo "---" >> issue.md
          echo "" >> issue.md
          echo "" >> issue.md

        
      - name: Create issue from template
        uses: peter-evans/create-issue-from-file@v4
        with:
          title: "[Show] ${{ github.event.inputs.show_title }}"
          content-filepath: issue.md
          token: ${{ secrets.GITHUB_TOKEN }}
          labels: show,task

This script had to be re-written because ChatGPT repeatedly used heredoc syntax, which would clash with GitHub's YAML. I had to explicitly ask it to "Please use echo >> on each line, and no heredoc."

One more automation script, to automatically move items in the project from "TODO" to "In Progress" when any of the checkboxes above was ticked, failed. ChatGPT usually fails when the documentation is incomplete and / or there are conflicting earlier documentation pages, as in this case. It would repeatedly suggest using

permissions:
  issues: write
  projects: write

when such a permission is actually no longer accepted by GitHub - it was part of "classic projects" - https://docs.github.com/en/enterprise-server@3.16/issues/planning-and-tracking-with-projects/creating-projects/migrating-from-projects-classic

Apparently with Projects V2, the REST API does not work; only the GraphQL API does. And it needs a classic personal access token to work. The code generated by ChatGPT had errors which I could not get it to fix over multiple iterations across multiple days, so I left the updates to be done manually, by clicking on the drop-down seen in the linked project for each issue. I could probably read the GraphQL API documentation and try again, but right now, not motivated to do so.
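For the record, the GraphQL route would involve the updateProjectV2ItemFieldValue mutation. A sketch of just the request payload, in Python (the IDs are made-up placeholders - in reality they are node IDs that must first be queried from the project, and the POST to https://api.github.com/graphql needs a classic PAT with the project scope):

```python
import json

# GraphQL mutation to set a single-select field (e.g. Status) on a project item.
MUTATION = """
mutation($project: ID!, $item: ID!, $field: ID!, $option: String!) {
  updateProjectV2ItemFieldValue(input: {
    projectId: $project, itemId: $item, fieldId: $field,
    value: {singleSelectOptionId: $option}
  }) { projectV2Item { id } }
}
"""

def build_payload(project_id, item_id, field_id, option_id):
    """Build the JSON body to POST to the GitHub GraphQL endpoint."""
    return json.dumps({
        "query": MUTATION,
        "variables": {"project": project_id, "item": item_id,
                      "field": field_id, "option": option_id},
    })

if __name__ == "__main__":
    # Placeholder node IDs, just to show the payload shape
    print(build_payload("PVT_xxx", "PVTI_xxx", "PVTSSF_xxx", "opt_xxx"))
```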

Tuesday, September 09, 2025

unable to use key-based auth

One of our servers had been upgraded, and after the upgrade, even though my public key was in the authorized_keys file, I was unable to ssh into that machine using a script with key-based authentication. 

It turned out the script which I was using to connect had the parameter
-o PubkeyAcceptedKeyTypes=ssh-rsa
which was necessary for the earlier server - earlier post about this - but not necessary for the current server - removing this parameter allowed the script to log in using key-based auth.
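Instead of passing the flag on the command line, the legacy option can also be scoped to just the old host in ~/.ssh/config, so that newer servers are unaffected (hostname below is made up; a config fragment, not tested here):

```
# ~/.ssh/config - apply the legacy key type only where still needed
Host legacy-server.example.com
    PubkeyAcceptedKeyTypes +ssh-rsa

# All other hosts keep the client defaults
```

The leading + appends ssh-rsa to the default list rather than replacing it, which is usually what is wanted.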

Monday, September 08, 2025

forcing 2FA for all cloudflare users

 After a cloudflare advisory asking us to rotate credentials due to a "Compromise Impacting Cloudflare Salesforce Cases", I was asked to force 2FA for all cloudflare users. 

https://developers.cloudflare.com/fundamentals/user-profiles/2fa/

The force-2FA option is available for super-admins in the cloudflare admin panel, at
Manage Account > Members > Members 2FA enforcement.  

Saturday, September 06, 2025

getting referral ids for Moodle course enrolments

There was a request to store and report the referral ids passed via GET request from another site onto one of our Moodle instances. I turned to ChatGPT (free) for help. 

For the immediate need of getting referral ids from past logs, it suggested some Python code - which, of course, did not work on the first try. But five iterations later, got something useful out of it, which is shared at
https://github.com/hn-88/moodle-plugin-local-referrals/tree/main/temporary%20python%20code 

Then, it suggested writing a local plugin, and also suggested skeleton code. But the suggested code would not work if the user was not logged in to our site before clicking the referral link on the external site. It suggested saving the referral code to $SESSION, but that did not seem to work. Debugging to the Apache error log with lines like 
error_log('[local_referrals] function on_user_enrolled() ... ');
showed that the referral id in $SESSION was being set to null when the user was logging in to our site.

Tried asking gemini.google.com (free) since ChatGPT seemed to be caught in a loop and unable to provide a fix. Gemini immediately looked up Moodle documentation and said that clearing the session (starting a new session) when a user logs on to Moodle is a known feature - and so we should store the referral id elsewhere - either in the database or in a cookie. Gemini in "Deep Research" mode also wrote a couple of essays about the issue, which could be summarized as my earlier sentence!

Then Gemini generated some code, but the code was incomplete. Asked ChatGPT to complete it - then asked Gemini to correct the errors introduced by ChatGPT - and the result is at
https://github.com/hn-88/moodle-plugin-local-referrals/ 

Some excerpts from the chats with ChatGPT and Gemini are in the wiki of that github repo - 
https://github.com/hn-88/moodle-plugin-local-referrals/wiki

Friday, September 05, 2025

Microsoft Azure - no meaningful way to contact regarding technical issues for non-profits

We're running some VMs on Microsoft Azure with free credits provided to non-profits. This means that in the Azure support section on the Azure portal, only billing support is visible, not technical support - that would probably need a paid plan. 

There was an intermittent network issue with one of our VMs - Ubuntu update not being able to reach the servers, ssh into the VM failing intermittently, and so on. 

I changed the type of support from Technical to Subscription, and then was able to submit the report.

Some more data points for troubleshooting -
1. Our other Azure VMs in other regions do not have this issue.
2. Some of our team members had issues connecting to this server on Aug 19 2025, between 3 & 7 pm IST. At around 8:45 pm IST, I could log on to the server without issues, I used BSNL as my ISP. On Aug 20, 11:30 am, the team at Hyderabad could not connect to the VM using Jiofiber ISP, but I could connect via Telexair located in Bangalore.
3. Rebooting the VM itself did not help - probably some other routing issue at your Central India location.

Edit 25 Sep - But there was no response from Microsoft other than that they would look into it - and the problem seemed to resolve itself in a few hours. Twenty days later, there was an email from the VM team, saying that the network team had passed the issue on to them, so I just replied, "We have not had issues since my last update to the ticket. You can call me if you wish between 9 and 11.30 am Indian Standard Time or between 12.30 and 3.30 pm IST, but I have nothing further to add."

Google's lack of human support is well known. So, Azure is more of the same. I believe this is the norm for all these "cloud" companies.