Mostly work-related stuff which I would've entered into my "Log book". Instead of hosting it on an intranet site, I'm outsourcing the hosting to Blogger!
Wednesday, April 30, 2025
development for Android / iOS - preliminaries
Monday, April 28, 2025
simple logger created by chatgpt
moving Virtualbox VMs to external drive
VirtualBox Manager
> Right-click on the VM
> Select "Move"
> Choose the new location and click "Move"
(Though the progress bar doesn't seem to work, it moved the VM and its associated files to the external drive in roughly the usual time a 12 GB file move would take.)
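The same move can apparently also be done from the command line with VBoxManage (a sketch; the VM name and target folder here are placeholders, and movevm needs VirtualBox 6.0 or later):

```shell
# Move a VM and its associated files to a new folder from the CLI
VBoxManage movevm "MyUbuntuVM" --folder /media/external/VMs
```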
Sunday, April 27, 2025
Google's "free to try" AI Studio
Refining my prompt, changed it to "Generate a VR360 equirectangular video of sunset over a beach" which resulted in this:
Geekbench scores - Android phone, iPad, laptop, desktop, Raspberry Pi
Wednesday, April 23, 2025
airtel's horrendous "AI" customer service
I was trying to recharge another prepaid Airtel number with "international roaming plan 2997" for 365 day validity.
Tried with Airtel thanks app from my Airtel mobile - failed.
Tried from website - failed.
Tried calling customer care - they said try going to a shop.
Tried from shop - failed.
Tried sending them an email to 121@in.airtel.com - "We noticed that you're contacting us from an email ID that isn't linked to your Airtel account.
You shall write back to us from your registered ID or reach out to airtel thanksapp get instant support 24x7 right from the help and support section."
The help and support section is a chatbot. There didn't seem to be any option to contact a human being.
Finally, I thought of trying from the Airtel app on the actual prepaid account for which international roaming was needed - success. Edit: half an hour later, that transaction was reversed, too!
Airtel's own customer service people did not know this, and the shop staff (I tried two different shops) did not know this either; it is not mentioned anywhere that these international roaming plans need to be bought from the phone which is actually going to roam!
UTM for OpenSpace on iOS / MacOS
Tuesday, April 22, 2025
cloudflare plugin for certbot
Found this option,
--dns-cloudflare Obtain certificates using a DNS TXT record (if you are using Cloudflare for DNS). (default: False)
from
https://eff-certbot.readthedocs.io/en/stable/man/certbot.html
And documentation for this plugin is at
https://certbot-dns-cloudflare.readthedocs.io/en/stable/index.html
(example invocation at end)
and how to install plugin is
sudo apt-get -y install python3-certbot-dns-cloudflare
https://installati.one/install-python3-certbot-dns-cloudflare-ubuntu-20-04/
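An example invocation along the lines of the plugin docs (the credentials file path is an assumption - the ini file holds your Cloudflare API token):

```shell
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini \
  -d example.com -d '*.example.com'
```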
Sunday, April 20, 2025
migrating web servers and associated scripts using rclone copy
As mentioned in the previous post, our server migration involved consolidating a web server into a DB server - we chose this method, since cloning the DB server was quick, while export followed by import of MySQL databases takes quite a bit longer as noted in the earlier 2024 migration post.
Copied the ssh private key for the destination server to the source server, and created an rclone remote using SFTP to the destination server.
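For reference, the SFTP remote can also be created non-interactively (a sketch with placeholder host/user/key names; the interactive rclone config works just as well):

```shell
rclone config create rcloneremote sftp \
  host destination.example.com \
  user azureuser \
  key_file /home/azureuser/.ssh/id_rsa
```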
zip -r -q webserver1.zip /path/to/webserver/files
(repeated this for all the web server directories later)
rclone copy webserver1.zip rcloneremote:/home/azureuser
(repeated this for all the other zip files later)
From the destination server,
unzip webserver1.zip
cd path/to/webserver/
sudo mv files /path/to/webserver
cd /path/to/webserver
sudo chown -R azureuser:www-data .
sudo chmod -R 775 .
----------------------------------------------
Then, the apache config files -
sudo zip -r -q etcapache.zip /etc/apache2
(rclone copy etc, and change ownership to root)
and the letsencrypt certificates etc,
sudo zip -r -q etclets.zip /etc/letsencrypt
(rclone copy etc, and change ownership to www-data for the file/directory which it needs, and the others to root)
-----------------------------------------------
With these, the file copying for the web servers was done, and I could start the sites up -
sudo a2ensite webserver1
sudo systemctl reload apache2
etc.
Also needed were
- installing all the necessary apache and php modules - which I had done earlier using the steps at the 2024 migration post.
- installing restic for backups as per the backups section of that post.
- rclone copy, individually, for the .sh backup scripts and the notifier directory.
- Modifying .config/rclone/rclone.conf as noted in the restic backup script - copy-pasting the extra remotes from the source machine's config to the destination's.
- Copy-pasting cron jobs from both crontab -e and sudo crontab -e
- Setting up postfix with gmail smtp as mentioned at this post for which
sudo apt remove ssmtp
sudo apt install mailutils postfix
and then editing the conf files was required.
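The main.cf additions for the Gmail relay typically look like this (a common recipe, not copied from the linked post; a Gmail app password goes into /etc/postfix/sasl_passwd):

```
# /etc/postfix/main.cf - relay outgoing mail via Gmail SMTP
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

After editing, sudo postmap /etc/postfix/sasl_passwd and sudo systemctl restart postfix.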
sudo nano /etc/cron.d/awstats
Solution was to install ghostscript,
sudo apt install ghostscript
--------------------------------------------------------
Then the php max_file_upload also had to be modified according to this earlier post, https://hnsws.blogspot.com/2021/03/some-post-install-changes-needed-for.html
sudo nano /etc/php/8.3/apache2/php.ini
( Ctrl+w to find and change post_max_size
Ctrl+w to find and change upload_max_filesize
Ctrl+w to find and change max_execution_time - set to 600 for now.)
sudo systemctl restart apache2
and also did the same for /etc/php/8.3/cli/php.ini
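For scripted migrations, the same three edits can be done with sed instead of nano (a sketch - the 100M values are my assumption, only the 600 comes from above; point PHP_INI at the real apache2 and cli php.ini files and add sudo). Demonstrated here on a stand-in file holding the stock defaults:

```shell
PHP_INI=php.ini   # stand-in; use /etc/php/8.3/apache2/php.ini etc. for real
printf 'post_max_size = 8M\nupload_max_filesize = 2M\nmax_execution_time = 30\n' > "$PHP_INI"
# Replace the three settings in place
sed -i \
  -e 's/^post_max_size = .*/post_max_size = 100M/' \
  -e 's/^upload_max_filesize = .*/upload_max_filesize = 100M/' \
  -e 's/^max_execution_time = .*/max_execution_time = 600/' \
  "$PHP_INI"
# Show the changed lines
grep -E '^(post_max_size|upload_max_filesize|max_execution_time)' "$PHP_INI"
```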
-----------------------------------------------------------
When upgrading a plugin, a warning was shown that php extension soap was not installed, and that update will continue only after it is installed. So,
sudo apt install php-soap
-------------------------------------------------------------
migrating Azure VM from one tenant to another
https://medium.com/@morsi.masmoudi/complete-guide-to-cloning-and-moving-a-vm-from-one-azure-tenant-to-another-8fa65dd6955a
sudo -u www-data /usr/bin/php admin/cli/maintenance.php --enable
- verified that this does stop the cron also, which would otherwise hit the database every minute.
azcopy copy mydisksnapshot.vhd "https://blob-storage-url" will work, but takes double the time - azcopy works fine directly from the snapshot URL to the blob storage URL. The caveat is that we must create a container first, and then use azcopy copy.
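The container-then-copy sequence looks roughly like this (placeholder names; the snapshot URL needs a SAS token, e.g. obtained via az snapshot grant-access):

```shell
# Create the target container, then copy snapshot -> blob directly
az storage container create --name vhds --account-name mystorageacct
azcopy copy \
  "https://<snapshot-sas-url>" \
  "https://mystorageacct.blob.core.windows.net/vhds/mydisksnapshot.vhd"
```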
Saturday, April 19, 2025
clearing up after docker
Via https://stackoverflow.com/questions/76833923/safe-way-to-clean-var-lib-docker
/var/lib/docker was using up 16 GB.
sudo docker system prune --all --volumes
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all anonymous volumes not used by at least one container
- all images without at least one container associated to them
- all build cache
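Before pruning, docker system df shows where the space is going, so you can see roughly what the prune will reclaim:

```shell
# Breakdown of disk usage: images, containers, local volumes, build cache
sudo docker system df
```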
installed an alarm applet
On Linux Mint running Xfce, via https://forums.linuxmint.com/viewtopic.php?f=47&t=191832
Installed via Software Manager.
KAlarm needed to download 400 MB! So, instead chose Alarm-clock-applet which was only 563 kB. Shows up in the Applications menu as Alarm Clock. Can set time or timer.
Tuesday, April 15, 2025
moving a MySQL database from one host to another
Monday, April 14, 2025
ode to github actions
Just a quick note of appreciation for GitHub Actions. I've moved most of my software work to editing directly on GitHub and building with GitHub Actions, for a variety of reasons:
- Relatively beefy build runners - much faster than the computers I have access to - completely free for public repositories, and generous allowances for free accounts' private repositories also.
- I can start builds from the office and check the results at home etc - the advantages of having everything online.
- Copilot's "Explain errors" makes it relatively easy to pinpoint and fix syntax errors and such.
- I don't have to use up the limited hard disk space on local machines for relatively large SDKs like Android SDK, Android Studio, etc.
- The issue tracker built into github makes it easy to keep track of bugs and fixes.
- All the advantages of git source management.
Friday, April 11, 2025
change submodule for a git repository
When trying to build OpenSpace for ARM64, I came across issues with the webbrowser extension - the CEF binary being downloaded was cef_binary_102.0.10+gf249b2e+chromium-102.0.5005.115_linux64.tar.bz2, while we need the ...linuxarm64.tar.bz2 one for ARM64.
In this case, this file seems to be responsible - https://github.com/OpenSpace/OpenSpace/blob/master/modules/webbrowser/cmake/cef_support.cmake
which we can fork and modify. In general, for other submodules not present in the main repo, the procedure seems to be:
How do I replace a git submodule with another repo? - Stack Overflow
If the location (URL) of the submodule has changed, then you can simply:
- Modify the .gitmodules file in the repo root to use the new URL.
- Delete the submodule's folder in the repo: rm -rf .git/modules/<submodule>.
- Delete the submodule folder in the working directory: rm -rf <submodule>.
- Run git submodule sync --recursive.
- Run git submodule update.
More complete info can be found elsewhere:
If the repo history is different, then you need to check out the new branch manually:
git submodule sync --recursive
cd <submodule_dir>
git fetch
git checkout origin/master
git branch master -f
git checkout master
And apparently, with Git 2.25 (Q1 2020), you can modify the submodule URL directly:
git submodule set-url -- <path> <newurl>
git clone error and more - trying OpenSpace compile on Raspberry Pi
Thursday, April 10, 2025
prerequisites for running OpenSpace on Raspberry Pi
https://forums.raspberrypi.com/viewtopic.php?t=380368. That worked, no further warnings.
permissions needed to rename themes and plugins wordpress folders
One of our wordpress websites was not entering recovery mode.
... email from Wordpress regarding a fatal error while doing a setting in the menu using Astra theme. We had encountered such an error earlier also but could recover the site through the recovery link. However today on clicking on the recovery link it says recovery not initialized. ...
So, we had to disable plugins in order to be able to access the site, following the SSH/SFTP method described in this page - https://kinsta.com/knowledgebase/disable-wordpress-plugins/
But ... files could be opened but cannot rename since it says permission denied
So, just adding the ssh user to the www-data group was not enough -
sudo adduser thewordpresswebsiteadminuser www-data
also had to chmod the parent directory to 775 - was 755 earlier.
cd theparentdirectory
chmod -R 775 *
Then renaming worked.
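The rename-based plugin disabling from the Kinsta page boils down to something like this (the docroot path is a placeholder):

```shell
cd /var/www/thewordpresssite/wp-content
mv plugins plugins.disabled    # WordPress now loads with all plugins off
# ...log in to wp-admin, fix the fatal error, then restore:
mv plugins.disabled plugins
```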
Wednesday, April 09, 2025
errors with OpenSpace AppImage on Ubuntu 24.04
Tried running the OpenSpace AppImage on a fresh Ubuntu VM running atop Virtualbox.
Error - error while loading shared libraries: libjack.so.0: cannot open shared object file
sudo apt install libjack0
solved that issue.
But then,
(D) CefHost Remote WebBrowser debugging available on http://localhost:8088
[0408/141049.216978:FATAL:setuid_sandbox_host.cc(157)] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that /tmp/.mount_OpenSpHMdIhf/home/runner/source/OpenSpace/build/modules/webbrowser/ext/cef/cef_binary_102.0.10+gf249b2e+chromium-102.0.5005.115_linux64/Release/chrome-sandbox is owned by root and has mode 4755.
Trace/breakpoint trap (core dumped)
Tried running the AppImage as root - could not connect to display. (Should probably have tried pkexec as mentioned in a previous post, but instead, I tried to solve the AppImage issue.)
Looks like this is due to apparmor - a bug report - https://askubuntu.com/questions/1512287/obsidian-appimage-the-suid-sandbox-helper-binary-was-found-but-is-not-configu
The --no-sandbox or --disable-setuid-sandbox methods did not work. But the apparmor method worked -
Created a file /etc/apparmor.d/openspaceappimage with the following content:
# This profile allows everything and only exists to give the
# application a name instead of having the label "unconfined"
abi <abi/4.0>,
include <tunables/global>
profile openspaceappimage /path/to/OpenSpace-0.20.1-1-x86_64.AppImage flags=(default_allow) {
userns,
# Site-specific additions and overrides. See local/README for details.
include if exists <local/openspaceappimage>
}
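As an aside, reloading just this profile with apparmor_parser might avoid the reboot (untested here; the reboot below also does the job):

```shell
# Replace/load the profile into the running kernel
sudo apparmor_parser -r /etc/apparmor.d/openspaceappimage
```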
Rebooted the VM, then the appimage ran. But then, blank screen after loading etc.
The web-ui at http://localhost:4680 works.
The OnlyEarth profile loads, but only at 0.2 fps. That seems to be due to 3D acceleration not being enabled on the VM's display - for that, I suppose guest additions are needed. Anyway, I shut down the VM, changed display RAM to 128 MB, enabled the 3D acceleration checkbox in the VirtualBox VM settings (I see that I had done so in an earlier post), booted up the VM, changed the display resolution inside the VM to 1920x1080 by right-clicking on the desktop, then ran OpenSpace - mostly OK.
It takes a minute or two to initialize GL and then load the assets; till then, a black screen for OpenSpace.
The default display configuration doesn't seem to work - black screen. But gui portrait, single fisheye gui etc worked fine. With OnlyEarth profile - 100 fps average with occasional stuttering.
Tried the default profile also later - it took around 2 hours to download all the 'sync' files - and ran very very slow, not usable.
Tuesday, April 08, 2025
another tunnelling solution like cloudflare
Another product which offers ssh tunnels - earlier post about cloudflare's offering is https://hnsws.blogspot.com/2025/02/cloudflare-for-reverse-proxying-on.html
Twingate: It's time to ditch your VPN
via
5 reasons EVERYONE needs a home server
Saturday, April 05, 2025
free AI movie generation - only 5 sec is usually free
Trying out more resources for "free" AI-generated movies - my earlier post about this is here. This video - Install WAN 2.1 The 100% FREE AI Video Generator : Very EASY & QUICK! - talks about installing
https://pinokio.computer/
(browser which allows to install and run various AI tools)
WAN2.1 is the actual video generator
And can do online for free,
https://wan21.net/ai-video-generator
but sometimes the queue takes a long time. At 1:30 PM IST, the prompt "A moonlit night sky with scattered clouds" rendered in around a minute, with less than a minute spent in the queue. The resulting video is below. 5 credits a month for the free account, I think.
upscaling and resizing with chainner
Chainner - A node-based image processing GUI - seems to have a relatively easy-to-use installer - deb files for Linux, Windows installer, etc. And it has a dependency manager which can install the required dependencies. Tried it out, but unfortunately, it could not use the external stable diffusion api - maybe because I didn't run the web ui with API enabled? or is it that the API has changed and hence it gives errors like "invalid HTTP request"? Tried running chainner after running easydiffusion - could not connect.
api wiki says latest info is at :7860/docs
http://127.0.0.1:7860/docs
Video - how to upscale images with chainner -
Models from https://openmodeldb.info/
(old site was https://upscale.wiki/wiki/Main_Page#The_Model_Database )
Chose https://openmodeldb.info/models/4x-RealWebPhoto-v4-dat2
Took 10 minutes. Seems to be using the CPU and not the GPU.
8x upscale with onnx took 24 minutes. Looks like water colour and not pointillist like the previous upscale with Stable Diffusion.
8x upscale with onnx using the model 4x-RealWebPhoto-v4-dat2 compared with the original image before scaling down and upscaling. Click on the image to see it larger.
There may be some models which would do fast upscaling with realistic results (unlike the compact model below, which seems to be optimised for anime).
Recommended models at
https://phhofm.github.io/upscale/favorites.html
Compact model - https://openmodeldb.info/models/2x-HFA2kCompact
For video, the YouTube video above used the video frame iterator and the compact model. Now the interface has video input and video output nodes, which automatically create the video from the frames. Using the compact model, the upscaling and processing of the 250-frame "Baba" video completed in just a few seconds. It looks a bit like anime / cartoon / water colour. If the numbers under the chaiNNer nodes are the times taken, then it was 0.04 sec per frame, and 9.8 sec for encoding to video.
Edit: There are lots of youtube tutorials for video upscaling, including ones using colab - https://www.bing.com/videos/search?q=stable+diffusion+video+upscaling
Thursday, April 03, 2025
checking out Easy diffusion
Tried out Easy Diffusion, https://github.com/easydiffusion/easydiffusion
checking to see if it is faster than Stable Diffusion webui - my previous post on that is here.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs#linux
Some useful links - basics of inpainting - https://github.com/easydiffusion/easydiffusion/wiki/Inpainting
https://allprompts.com/stable-diffusion-prompts-for-realistic-photos/
So we should add words like national geographic, landscape, photo, photography
Easy Diffusion - better interface, and a bit faster - 45 sec instead of 60 sec to generate a 512x512 image, and just 7 sec to upscale the generated image. But there doesn't seem to be a way to upscale our own images, at least not in the UI. Trying the same thing on the laptop's CPU, the generation that took 45 seconds on the 1060 graphics card took nearly 12 minutes!
Some example prompts and resulting images:
1. moonlit landscape -
2. photo of night sky with full moon tokyo skyline in silhouette nikon national geographic
3. moonlit landscape, nikon, national geographic, full moon, trees (with upscaling from 512 to 1024) -
run unsigned binaries on MacOS
I was not sure if unsigned binaries would run on macOS, and whether one can override the default behaviour. Apparently, from macOS 10.15 Catalina onwards, unsigned binaries won't run by default, but we can specifically whitelist any binary we want to run, and then it will run.
https://github.molgen.mpg.de/pages/bs/macOSnotes/mac/mac_procs_unsigned.html
https://discussions.apple.com/thread/254786001?sortBy=rank
Either via the GUI -
* Try to open myFancyBinary in Finder. This will fail.
* Open System Preferences, choose the Security control panel, select the General tab.
* Look for the message: “myFancyBinary was blocked from opening because it is not from an identified developer.”
* Click the Open Anyway button to the right of the message.
or via the terminal -
spctl --add /path/to/myBinary
And for getting the full path, an easy way is to drag and drop the binary from Finder into the Terminal window.
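Another approach often suggested (my addition, not from the linked pages) is to strip the quarantine attribute that Gatekeeper checks on downloaded files:

```shell
xattr -d com.apple.quarantine /path/to/myBinary
```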
Stable Diffusion tests and benchmarks
Tuesday, April 01, 2025
experiments with LLMs and docker
We need a different set of parameters on the command line if Ollama is already installed on our system. So, I got Open WebUI to work with two modifications:
creating ai agents - just links
free plan is 1000 executions, 14 days.
via
Create Your FIRST AI Agent Today - youtube video
(uses paid n8n, paid chatgpt etc)
serpapi 100 searches per month in free plan - https://serpapi.com/pricing
n8n.io - can run locally also.
https://n8n.io/itops/ - using n8n - nodemation - to automate user creation etc. https://www.youtube.com/watch?v=ovlxledZfM4