Wednesday, April 30, 2025

development for Android / iOS - preliminaries

Seeing in a previous post that mobile devices were quite competitive with the desktop's processing power, I thought about porting various tools like OCVWarp to Android and iOS if feasible.

OpenCV Android - had earlier cloned / forked NDK samples.

The binaries are available here only up to OpenCV v3.4.3, but the iOS framework seems to be available at
and

To check if swift playgrounds is available on 9th gen ipad - https://www.theverge.com/2021/6/15/22534902/ipad-pro-apple-swift-playgrounds-4-wwdc-2021

To check if example opencv apps can run on ipad.

Monday, April 28, 2025

simple logger created by chatgpt

I wanted to make a simple load-average logger, and thought of making ChatGPT do the typing instead of writing the simple code myself.

The result was quite OK.


The prompts and conversation are in the issues tab.
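For reference, a minimal version of such a logger (a sketch along the same lines, not the exact script ChatGPT produced) could look like:

```shell
#!/bin/sh
# Append a timestamped load-average sample to a log file.
# Usage: loadlog.sh [logfile]  (defaults to ./loadavg.log)
LOGFILE="${1:-loadavg.log}"

# /proc/loadavg looks like "0.52 0.58 0.59 1/389 12345";
# the first three fields are the 1, 5 and 15 minute load averages.
read one five fifteen rest < /proc/loadavg
printf '%s %s %s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" \
    "$one" "$five" "$fifteen" >> "$LOGFILE"
```

Run it from cron (e.g. every minute) to build up the log over time.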

moving Virtualbox VMs to external drive

VirtualBox Manager
> Right-click on the VM
> Select "Move"
> Choose the new location and click "Move"

(Tho' the progress bar doesn't seem to work, it moved the VM and associated files to the external drive in approximately the usual time a 12 GB file move would take.)
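The same move can also be done from the command line with VBoxManage; a sketch (VM name and destination path are placeholders):

```shell
# Move the VM and its associated files to a folder on the external drive.
# "UbuntuVM" and the folder path are placeholders.
VBoxManage movevm "UbuntuVM" --type basic --folder "/media/external/VMs"
```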

Sunday, April 27, 2025

Google's "free to try" AI Studio

Google AI Studio has image generation, video generation and more, in addition to text chat. But even Google's Gemini does not know this:) 

Unless we enable "auto-save" in the settings (I guess) our prompts are not saved in History.

Choosing the model Gemini 2.0 Flash (image generation), the prompt "Create an image of a sunset over a sandy beach" gave the result below.


Not bad.

By clicking the Video generation tab, Veo 2 generated the following videos on the respective prompts. Not very good, but it's a good start. And the best part is that it's free. And it took less than a minute to generate.

"Create a video of overcast clouds over sunset on a sandy beach" - resulted in a timelapse-style video below:


"Generate a 360 degree equirectangular video of sunset over a beach" resulted in:

Since this was not really an equirectangular 360 degree video, tried by running the same prompt again. Then it gave:

 Refining my prompt, changed it to "Generate a VR360 equirectangular video of sunset over a beach" which resulted in this:

I didn't want any human beings or birds in the video, so I tried adding a negative prompt of birds, men, women - and the same prompt as above. But the result still had this woman,

All right. Maybe it will do better when given an image to work with. So, I uploaded the sunset image earlier generated with Gemini 2.0 Flash, and gave the prompt "Generate a VR360 equirectangular video from this image by panning left to right."
But it did not pan, instead added birds!

In summary, a long way to go. But since it's fast and free (the video generation did not specify any queries per day limits, but the Gemini models did specify 1500 requests per day and so on in the free tier), we can definitely try out some stuff. And perhaps upscale locally or with google colab. 


Geekbench scores - Android phone, iPad, laptop, desktop, Raspberry Pi

Thought of doing some benchmarks to rate the performance of the various devices to which I have access. Samsung Galaxy M34 Android phone, Raspberry Pi 4 8 GB without cooling fan, i5-4430 processor desktop with GTX 1050 graphics card, i5-1235U processor laptop with Intel graphics, 9th gen iPad.

https://www.geekbench.com/download/

Higher scores are better.

Example benchmarks also linked in the page above.

CPU single-core / multi-core - the Android phone was surprisingly close to the desktop. The iPad crashed this test, so I've not added it in the lineup below. The Pi overheated without the cooling fan, so its multi-core score is worse than it would be with better cooling.

Pi4 (292/519) < M34 (960/2072) < Desktop (1062/2813) < Laptop (2131/7460)

GPU OpenCL / Vulkan - the Pi4 gave an error (unknown CL platform), so it is not listed below. But the Pi5 seems to have a rating of only 96! And the iPad perhaps outperformed the laptop.

M34 (2306/2339) < Laptop (11723/15237) < iPad (13914 (Metal)) < Desktop (22555/NA)

Conclusion - The Raspberry Pi 4 did much worse than I expected, and the mobile devices - the phone and the iPad - did better than I expected. 


Wednesday, April 23, 2025

airtel's horrendous "AI" customer service

I was trying to recharge another prepaid Airtel number with "international roaming plan 2997" for 365 day validity.

Tried with Airtel thanks app from my Airtel mobile - failed.

Tried from website - failed.

Tried calling customer care - they said try going to a shop.

Tried from shop - failed.

Tried sending them an email to 121@in.airtel.com - "We noticed that you're contacting us from an email ID that isn't linked to your Airtel account. 

You shall write back to us from your registered ID or reach out to airtel thanksapp get instant support 24x7 right from the help and support section."

The help and support section is a chatbot. There didn't seem to be any option to contact a human being. 

Finally, I tried from the Airtel app on the actual prepaid account for which Intl. Roaming was needed - success. Edit - half an hour later, that transaction was reversed, too!

Airtel's own customer service people did not know this, and the shop staff (at two different shops) did not know this either - it is not mentioned anywhere that these intl. roaming plans need to be bought from the phone which is actually going to roam!

UTM for OpenSpace on iOS / MacOS

Found on https://getutm.app/faq/#what-are-the-limitations
"The lack of hardware virtualization on Apple A-chips means that even for ARM code we must re-compile it with JIT. Therefore performance would never reach the levels possible with KVM. There is also no support for GPU virtualization so that means no DirectX or OpenGL. This makes most modern games non-playable." So, OpenSpace is most probably not suitable to run via UTM on iPad / iOS.


"UTM does not currently support GPU emulation/virtualization on Windows and therefore lacks support for 3D acceleration (e.g. OpenGL and DirectX). You may be able to run older games with software rendering options, but nothing with hardware acceleration. There is experimental support for hardware OpenGL acceleration on Linux through Virgl."
So, OpenSpace on Ubuntu might be doable - at least with performance similar to a Windows box without NVidia graphics - on a Mac with UTM.

---------------------------
Details of what didn't work

Found the limitations the hard way, trying for many hours on the (9th gen) iPad (since I don't have access to a Mac at the moment). 
UTM SE - with no JIT - took half an hour or more to boot Ubuntu 14.04 (that's not a typo, it really was the old version 14.04)

Trying to enable JIT - 

Directly on AltStore - need a Hackintosh or Mac for it? - https://github.com/altstoreio/AltStore/issues/1349

says use stikJIT - https://stikdebug.xyz/


Apparently enabling JIT is a one-time installation - https://faq.altstore.io/altstore-classic/enabling-jit

We can't use Altstore PAL since we're not in the EU - tried and failed.

To install a downloaded ipa, go to Altstore > My apps (at the bottom), and click the + on the top. The ipa should be visible. 

(if ipa is not visible, we may need to navigate using the files app, and "open with" Altstore.
Then, it asks for apple creds.)

Transferred the pairing file using icloud drive.

Long press on UTM, then "enable jit", then try to open the vm, then
error trying to connect to spice server.

If StosVPN is turned off, then AltJIT not found, can't run VMs without JIT.

FAILED TO CONNECT TO SPICE

To force quit, 
swipe up and hold to open app switcher, then swipe up on the app to force-quit it.

Jitstreamer EB on Debian guide is here,

To try - minimal Ubuntu or something like that without 3d acceleration, which may not need spice.

debian 11 ARM64 lxde with spicetools installed, from gallery.

It was a zip file. Open in Downloads > unzipped. Long press and open in UTM.

But the same error: QEMU exited with no error message.

UTM also says Virtualization is not supported on your device.
So we can only use Emulate.

Still, same error. Then, I tried UTM SE, which took half an hour to boot the old version of Ubuntu as noted at the top of this post.


Tuesday, April 22, 2025

cloudflare plugin for certbot

Found this option, 

--dns-cloudflare      Obtain certificates using a DNS TXT record (if you are using Cloudflare for DNS). (default: False)

from

https://eff-certbot.readthedocs.io/en/stable/man/certbot.html

And documentation for this plugin is at

https://certbot-dns-cloudflare.readthedocs.io/en/stable/index.html

(example invocation at end)

and how to install plugin is 

sudo apt-get -y install python3-certbot-dns-cloudflare

https://installati.one/install-python3-certbot-dns-cloudflare-ubuntu-20-04/
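Pulling those pieces together, a typical invocation would look something like this (domain and credentials path are placeholders; the credentials file holds a Cloudflare API token):

```shell
# ~/.secrets/certbot/cloudflare.ini should contain a line like:
#   dns_cloudflare_api_token = <your-cloudflare-api-token>
# and must not be world-readable:
chmod 600 ~/.secrets/certbot/cloudflare.ini

sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini \
  -d example.com -d '*.example.com'
```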

Sunday, April 20, 2025

migrating web servers and associated scripts using rclone copy

As mentioned in the previous post, our server migration involved consolidating a web server into a DB server - we chose this method, since cloning the DB server was quick, while export followed by import of MySQL databases takes quite a bit longer as noted in the earlier 2024 migration post.

Copied the ssh private key for the destination server to the source server, and created an rclone remote using SFTP to the destination server. 

zip -r -q webserver1.zip /path/to/webserver/files

(repeated this for all the web server directories later)

rclone copy webserver1.zip rcloneremote:/home/azureuser

(repeated this for all the other zip files later)

From the destination server,

unzip webserver1.zip

cd path/to/webserver/

sudo mv files /path/to/webserver

cd /path/to/webserver

sudo chown -R azureuser:www-data .

sudo chmod -R 775 .
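The source-side zip-and-copy steps, repeated for each web server directory, could be scripted roughly like this (directory names and the remote name are placeholders from the steps above):

```shell
#!/bin/sh
# Zip each web server directory and push it to the destination
# server via the SFTP rclone remote. Names are placeholders.
for dir in webserver1 webserver2 webserver3; do
    zip -r -q "$dir.zip" "/path/to/$dir/files"
    rclone copy "$dir.zip" rcloneremote:/home/azureuser
done
```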

----------------------------------------------

Then, the apache config files - 

sudo zip -r -q etcapache.zip /etc/apache2

(rclone copy etc, and change ownership to root)

and the letsencrypt certificates etc,

sudo zip -r -q etclets.zip /etc/letsencrypt

(rclone copy etc, and change ownership to www-data for the file/directory which it needs, and the others to root)

-----------------------------------------------

With these, the files to be copied for the web servers were done. And I could start them up - 
sudo a2ensite webserver1
sudo systemctl reload apache2
etc.

Also needed were

  • installing all the necessary apache and php modules - which I had done earlier using the steps at the 2024 migration post.
  • installing restic for backups as per the backups section of that post.
  • rclone copy, individually, for the .sh backup scripts and the notifier directory.
  • Copy-pasting lines from .config/rclone/rclone.config  as noted in the restic backup script - the rclone config was modified by copy pasting the extra remotes from source to destination.
  • Copy-pasting cron jobs from both crontab -e and sudo crontab -e
  • Setting up postfix with gmail smtp as mentioned at this post for which 
    sudo apt remove ssmtp
    sudo apt install mailutils postfix

    and then editing the conf files was required.
-----------------------------------------------
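For the postfix + Gmail SMTP step, the main.cf relay settings boil down to something like this (a sketch from memory; a Gmail app password is required):

```
# /etc/postfix/main.cf - relay all outgoing mail through Gmail
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

/etc/postfix/sasl_passwd holds a line of the form `[smtp.gmail.com]:587 yourname@gmail.com:app-password`; run `sudo postmap /etc/postfix/sasl_passwd` and `sudo systemctl restart postfix` afterwards.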

Then, we had awstats running, so 
sudo zip -r -q etcawstats.zip /etc/awstats
for the config files, and
sudo apt install awstats
with editing out the cron which is auto installed at
sudo nano /etc/cron.d/awstats

and then copying across the awstats data - the DataDir was set to this directory -  
sudo zip -r -q varlibawstats.zip /var/lib/awstats
(then rclone copy, then unzip on the destination machine, move and set appropriate permissions etc) 

----------------------------------------------------

Tested the backup and load alert notification scripts, checked awstats the next day to verify that the stats were updating correctly.

------------------------------------------------------
Edit: one week later - some more configurations which were missed out.

sudo apt install certbot python3-certbot-apache
sudo certbot --apache

This was sufficient to enable auto-updates for the letsencrypt SSL certificates which had been copied over from the old server.

On another server, onto which we had migrated a single domain, but which already had other active letsencrypt SSL encrypted websites,

sudo certbot --cert-name subdomain.ourdomain.org
for renewing only this one, without disturbing the others.

For removing a certificate later, we can use
certbot delete --cert-name example.com
[so you will need to know the exact cert-name - not the specific FQDN(or domain name) within the cert]
[you can get the cert names with: certbot certificates]


-----------------------------------------------------------

One of the Moodle instances gave an error: "Adhoc task failed: assignfeedback_editpdf\task\convert_submission, The configured path to ghostscript is not correctly set: The ghostscript path points to a non-existent file"

Solution was to install ghostscript,
sudo apt install ghostscript

--------------------------------------------------------

Then the php upload limits (post_max_size, upload_max_filesize and so on) also had to be modified according to this earlier post, https://hnsws.blogspot.com/2021/03/some-post-install-changes-needed-for.html

sudo nano /etc/php/8.3/apache2/php.ini

( Ctrl+w to find and change post_max_size

Ctrl+w to find and change upload_max_filesize

Ctrl+w to find and change max_execution_time  - set to 600 for now.)

sudo systemctl restart apache2

and also did the same for /etc/php/8.3/cli/php.ini
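The Ctrl+W edits can also be done non-interactively with sed. A sketch - the 600 is from above, the 128M values are just examples; set INI to /etc/php/8.3/apache2/php.ini (and run with sudo) for real use. The script defaults to a scratch file so it can be tried safely:

```shell
#!/bin/sh
# Non-interactive equivalents of the find-and-change edits above.
# INI defaults to a scratch copy; set INI=/etc/php/8.3/apache2/php.ini
# (and run with sudo) to edit the real config.
INI="${INI:-./php.ini.scratch}"
[ -f "$INI" ] || printf '%s\n' 'post_max_size = 8M' \
    'upload_max_filesize = 2M' 'max_execution_time = 30' > "$INI"

sed -i \
    -e 's/^post_max_size = .*/post_max_size = 128M/' \
    -e 's/^upload_max_filesize = .*/upload_max_filesize = 128M/' \
    -e 's/^max_execution_time = .*/max_execution_time = 600/' \
    "$INI"
```

Followed, as above, by restarting apache2 and repeating for the cli php.ini.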

-----------------------------------------------------------

When upgrading a plugin, a warning was shown that php extension soap was not installed, and that update will continue only after it is installed. So,

sudo apt install php-soap

-------------------------------------------------------------

migrating Azure VM from one tenant to another

Azure documentation talks about migrating a VM from one region to another region - https://learn.microsoft.com/en-us/azure/resource-mover/tutorial-move-region-virtual-machines


The same approach mostly works when moving between tenants, but with some changes - instead of a file share, creating a disk from a snapshot seems to need a blob container for the "Create Managed Disk" applet nowadays. Following the exact steps as above results in not finding any container in that storage account.

Shifted using the following steps:

1. Enabled CLI maintenance mode on the Moodle instances with 
sudo -u www-data /usr/bin/php admin/cli/maintenance.php --enable
- verified that this does stop the cron also, which would otherwise hit the database every minute.

2. Shut down the database VM, went to its OS disk, created snapshot, went to the snapshot's page, "Export snapshot", copied the export snapshot url. 

3. I had earlier created a test VM in the destination tenant, in the Central India region which is the cheapest at present. Had also created a storage account, and a blob container. Also, had installed azcopy on that VM with the following - https://www.thomasmaurer.ch/2019/05/how-to-install-azcopy-for-azure-storage/
#Download AzCopy
wget https://aka.ms/downloadazcopy-v10-linux
 
#Expand Archive
tar -xvf downloadazcopy-v10-linux
 
#(Optional) Remove existing AzCopy version
sudo rm /usr/bin/azcopy
 
#Move AzCopy to the destination you want to store it
sudo cp ./azcopy_linux_amd64_*/azcopy /usr/bin/


4. Created SAS token (Shared Access Signature) from Azure portal > thestorageaccount - the default 3600 seconds may be too short if we need to do multiple operations. Enabled container, blob and file, all three, and copied the blob url and sas token strings. The blob url would be of the form https://storageaccountname.blob.core.windows.net/?sv=SAS-token - we need to edit this, and add nameofourcontainer/desired-name-of-file.vhd before the ?sv=SAS-token

5. SSH'd into that VM, and ran azcopy. (I could have avoided creating the VM, installing azcopy etc by using an Azure shell instead.) The command will be of the form:
azcopy copy "https://md-abcefg12nmg.z12.blob.storage.azure.net/snapshot/abcd?sv=2018-03-28&sr=b&si=this-is-the-snapshot-url-1234566b61d3&sig=zj9jh0ABCDEDFGHHJtnkD9yK0rvJ44I%3D" "https://lmsmigratestorage.blob.core.windows.net/testcontainer/LMSDBsnapshot.vhd?sv=2024-11-04&ss=bfqt&srt=sco&sp=abcdefghytfx&se=2025-04-21T11:49:12Z&st=2025-04-20T03:49:12Z&spr=https&sig=qwertyuiopasdfghjkABCDEFGuLYlgBQlWZE%3D"

# Actually ran steps 2 and 5 three times, azcopy took the following times:
# Elapsed Time (Minutes): 2.034 <-- LMS db OS disk
# Elapsed Time (Minutes): 1.2004 <-- LMS php disk
# Elapsed Time (Minutes): 22.0734 <-- LMS data disk
 
6. From Azure portal, in the destination tenant, Home > Create > Managed Disks > (Choosing Source as Blob storage) > Browse > choosing our VHD snapshot file in blob storage. Creating disk from snapshot, creating snapshot etc are very fast operations, these operations complete in less than a minute.  

7. From the created disk, if it is an OS disk, Create VM. Note that in our case, the default "Operating System type" was v1, but our original VM was v2, so I created the new VM also as v2. 

8. The data disk can be attached later to the VM, even when it is running - "Attach existing disk". Caution - attaching an OS disk as data disk does not seem to work properly - that new OS disk becomes the primary OS disk! 

9. In our case, we were cloning the db VM and copying the php files (and postgresql db) from the php server into the db VM, since we wanted to consolidate the two VMs into a single one. So, I had to copy over from the old VM to the new VM using rclone copy. What I needed to copy, and the steps taken, will be a separate post.

(What does not work:
1. "Export disk" and trying to use that using azure cli - this will not work. We need to create a snapshot first, then export the snapshot, and use the snapshot to re-create the disk in the new tenant.
2. Mounting azure blob storage on a VM and using wget to copy the snapshot VHD to blob storage will not help us to create the disk in the new tenant - azcopy is mandatory.
3. wget -O mydisksnapshot.vhd "https://snapshot-export-URL" followed by
azcopy copy mydisksnapshot.vhd "https://blob-storage-url" will work, but takes double the time - azcopy works fine directly from snapshot url to blob storage url. Caveat is that we must make a container first, and then use azcopy copy.) 

Saturday, April 19, 2025

clearing up after docker

 Via https://stackoverflow.com/questions/76833923/safe-way-to-clean-var-lib-docker

/var/lib/docker was using up 16 GB. 

sudo docker system prune --all --volumes

WARNING! This will remove:

  - all stopped containers
  - all networks not used by at least one container
  - all anonymous volumes not used by at least one container
  - all images without at least one container associated to them
  - all build cache

Total reclaimed space: 7.794GB


installed an alarm applet

 On Linux Mint running Xfce, via https://forums.linuxmint.com/viewtopic.php?f=47&t=191832

Installed via Software Manager.

KAlarm needed to download 400 MB! So, instead chose Alarm-clock-applet which was only 563 kB. Shows up in the Applications menu as Alarm Clock. Can set time or timer.

Tuesday, April 15, 2025

moving a MySQL database from one host to another

We had earlier moved a Moodle install from one server to another, and now wanted to move its database also to localhost.

Using this to set up mysql, creating database + user, and later testing a connection - 

sudo mysql -p worked for logging in as root user.

create database our_master_db;

CREATE USER 'our_db_admin'@'%' IDENTIFIED BY 'OurPassword';
GRANT ALL ON our_master_db.* TO 'our_db_admin'@'%';

to export and restore dbs.
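The export side, run on the old host, would be something like this (host and flags are my assumptions, the rest matches the import below):

```shell
# On the old server: dump the database, including routines and triggers.
date; mysqldump -h 127.0.0.1 -u our_db_admin -p \
    --single-transaction --routines --triggers \
    our_master_db > our_master_db.OldServerdb-2024.15-04-2025.sql; date
```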

date; mysql -h 127.0.0.1 -u our_db_admin -p our_master_db < our_master_db.OldServerdb-2024.15-04-2025.sql; date

This showed that it completed in some 20 minutes. After that, needed to change the connection settings and port in Moodle's config.php

Monday, April 14, 2025

ode to github actions

Just a quick note of appreciation for Github Actions. I've moved most of my software work to directly editing on github and building with github actions, due to a variety of reasons, like

  1. Relatively beefy build runners - much faster than the computers I have access to - completely free for public repositories, and generous allowances for free accounts' private repositories also.
  2. I can start builds from the office and check the results at home etc - the advantages of having everything online.
  3. Copilot's "Explain errors" makes it relatively easy to pinpoint and fix syntax errors and such.
  4. I don't have to use up the limited hard disk space on local machines for relatively large SDKs like Android SDK, Android Studio, etc.
  5. The issue tracker built into github makes it easy to keep track of bugs and fixes.
  6. All the advantages of git source management.
For example, I'm trying out Android NDK for exploring OCVWarp or OpenSpace porting to Android / iOS, and all the samples finished building in just 15 minutes.
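As an illustration, a minimal workflow file of the kind used for such builds - the build commands are placeholders for whatever the project needs:

```yaml
# .github/workflows/build.yml - runs on GitHub's hosted runners on every push
name: build
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure and build
        run: |
          mkdir -p build && cd build
          cmake .. && make -j"$(nproc)"
```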

Friday, April 11, 2025

change submodule for a git repository

 When trying to build OpenSpace for ARM64, came across issues with webbrowser ext - CEF binary being downloaded was cef_binary_102.0.10+gf249b2e+chromium-102.0.5005.115_linux64.tar.bz2 while we need ...linuxarm64.tar.bz2 for ARM64

In this case, this file seems to be responsible - https://github.com/OpenSpace/OpenSpace/blob/master/modules/webbrowser/cmake/cef_support.cmake

which we can fork and modify. In general, for other submodules not present in the main repo, the procedure seems to be: 

How do I replace a git submodule with another repo? - Stack Overflow

If the location (URL) of the submodule has changed, then you can simply:

  1. Modify the .gitmodules file in the repo root to use the new URL.
  2. Delete the submodule folder in the repo: rm -rf .git/modules/<submodule>
  3. Delete the submodule folder in the working directory: rm -rf <submodule>
  4. Run git submodule sync (or git submodule sync --recursive).
  5. Run git submodule update.

More complete info can be found elsewhere:

Changing remote repository for a git submodule

If repo history is different then you need to checkout new branch manually:

git submodule sync --recursive
cd <submodule_dir> 
git fetch
git checkout origin/master
git branch master -f
git checkout master

And apparently with Git 2.25 (Q1 2020), you can modify it.

git submodule set-url -- <path> <newurl>

git clone error and more - trying OpenSpace compile on Raspberry Pi

An initial trial - just to see what breaks - trying to git clone OpenSpace and build it on a Raspberry Pi 4.

cloning into 'OpenSpace'...
error: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408
fatal: expected 'packfile'

Apparently, this was due to an apt upgrade happening in another terminal window. Tried again later, and the git clone succeeded.

Now the error is during the generation of cmake files, regarding not finding gcc-13 - compiling gcc may take 6 hours.


https://gcc.gnu.org/install/
make bootstrap-lean to use less disk space.

Maybe I'll first try on a github runner to look out for ARM errors - 
"We expect to begin offering Arm runners for open source projects by the end of the year. "

"The arm64 Linux runner is in public preview and subject to change."

libcef seems to be available for arm64, but some bugs for RPi

Edit - arm64 build fails due to wrong CEF binary - will need to fork the project, make modifications ... 

Thursday, April 10, 2025

prerequisites for running OpenSpace on Raspberry Pi

Initially, when I powered up the Pi, it showed a low-voltage warning on the Desktop (which I was accessing via RealVNC.) Disconnecting the powered USB hub, booting up with a 60W mobile charger - still the same warning. Then tried switching the USB-C cable to the charger's thicker cable as per
https://forums.raspberrypi.com/viewtopic.php?t=380368. That worked, no further warnings.

Check if mesa (software opengl rendering) is already the latest,

On earlier Pis, it used to be Fake KMS (KMS = Kernel Mode Setting.)


About OpenGL compatibility -   
The VideoCore VII is capable of OpenGL ES 3.1 and Vulkan 1.2 according to

and according to
OpenGL 3.3 is required.

So, we might need to use mesa software rendering,



(TPM mentioned above, is Trusted Platform Module.)



Another possibility may be to cross compile.


Error:
apt install fuse led to
The following packages will be removed:
fuse3 gvfs-fuse ntfs-3g xdg-desktop-portal xdg-desktop-portal-gtk xdg-desktop-portal-wlr
The following NEW packages will be installed:
  fuse
0 upgraded, 1 newly installed, 6 to remove and 4 not upgraded.

But even with that, the same fuse error for the x86-64 appimage running on box64. So, put back fuse3, ntfs-3g, and others. The warnings / errors seen are:

[BOX64] Binary search path: ./:bin/:/usr/local/sbin/:/usr/local/bin/:/usr/sbin/:/usr/bin/:/sbin/:/bin/:/usr/local/games/:/usr/games/
[BOX64] Looking for /media/user/SSD256GB01/rpiOpenSpace/OpenSpace-0.20.1-1-x86_64.AppImage
[BOX64] Rename process to "OpenSpace-0.20.1-1-x86_64.AppImage"
[BOX64] Warning, CALL to 0x0 at 0x401efd (0x401efc)
This doesn't look like a squashfs image.
[BOX64] Warning: Unsupported Syscall 0x54h (84)

Cannot mount AppImage, please check your FUSE setup.
You might still be able to extract the contents of this AppImage 
if you run it with the --appimage-extract option. 
See https://github.com/AppImage/AppImageKit/wiki/FUSE 
for more information

permissions needed to rename themes and plugins wordpress folders

 One of our wordpress websites was not entering recovery mode. 

... email from Wordpress regarding a fatal error while doing a setting in the menu using Astra theme. We had encountered such an error earlier also but could recover the site through the recovery link. However today on clicking on the recovery link it says recovery not initialized. ...

So, we had to disable plugins in order to be able to access the site. To follow the SSH/SFTP method described in this page - https://kinsta.com/knowledgebase/disable-wordpress-plugins/

But ... files could be opened but cannot rename since it says permission denied

So, just adding the ssh user to the www-data group was not enough - 

sudo adduser thewordpresswebsiteadminuser www-data

also had to chmod the parent directory to 775 - was 755 earlier. 

cd theparentdirectory 
chmod -R 775 *

Then renaming worked.

Wednesday, April 09, 2025

errors with OpenSpace AppImage on Ubuntu 24.04

 Tried running the OpenSpace AppImage on a fresh Ubuntu VM running atop Virtualbox.

Error - error while loading shared libraries: libjack.so.0: cannot open shared object file

sudo apt install libjack0

solved that issue.

But then, 

(D) CefHost              Remote WebBrowser debugging available on http://localhost:8088

[0408/141049.216978:FATAL:setuid_sandbox_host.cc(157)] The SUID sandbox helper binary was found, but is not configured correctly. Rather than run without sandboxing I'm aborting now. You need to make sure that /tmp/.mount_OpenSpHMdIhf/home/runner/source/OpenSpace/build/modules/webbrowser/ext/cef/cef_binary_102.0.10+gf249b2e+chromium-102.0.5005.115_linux64/Release/chrome-sandbox is owned by root and has mode 4755.

Trace/breakpoint trap (core dumped)

Tried running the AppImage as root - could not connect to display. (Should probably have tried pkexec as mentioned in a previous post, but instead, I tried to solve the appimage issue.)

Looks like this is due to apparmor - a bug report - https://askubuntu.com/questions/1512287/obsidian-appimage-the-suid-sandbox-helper-binary-was-found-but-is-not-configu

The --no-sandbox or --disable-setuid-sandbox methods did not work. But the apparmor method worked - 

Created a file /etc/apparmor.d/openspaceappimage with the following content:

# This profile allows everything and only exists to give the
# application a name instead of having the label "unconfined"

abi <abi/4.0>,

include <tunables/global>

profile openspaceappimage /path/to/OpenSpace-0.20.1-1-x86_64.AppImage flags=(default_allow) {

  userns,

  # Site-specific additions and overrides. See local/README for details.

  include if exists <local/openspaceappimage>

}
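Instead of a full reboot, reloading just this profile with apparmor_parser should also work (I didn't try this):

```shell
sudo apparmor_parser -r /etc/apparmor.d/openspaceappimage
```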

Rebooted the VM, then the appimage ran. But then, blank screen after loading etc.

The web-ui at http://localhost:4680 works.

OnlyEarth profile loads, but only at 0.2 fps. That seems to be due to 3d acceleration not being enabled on the VM's display. For that, I suppose guest additions are needed? Anyway, shut down the VM, changed display RAM to 128 MB, enabled the 3d acceleration checkbox in Virtualbox VM settings (I see that I had done so in an earlier post), booted up the VM, changed the display resolution inside the VM to 1920x1080 by right-clicking on the desktop, then ran OpenSpace - mostly OK.

Takes a minute or two to initialize GL, then load assets; till then, a black screen for OpenSpace.

The default display configuration doesn't seem to work - black screen. But gui portrait, single fisheye gui etc worked fine. With OnlyEarth profile - 100 fps average with occasional stuttering. 

Tried the default profile also later - it took around 2 hours to download all the 'sync' files - and ran very very slow, not usable.


Saturday, April 05, 2025

free AI movie generation - only 5 sec is usually free

 Trying out more resources for "free" AI generated movies - my earlier post about this is here - this video - Install WAN 2.1 The 100% FREE AI Video Generator : Very EASY & QUICK! - talks about installing 

https://pinokio.computer/
(browser which allows to install and run various AI tools)

WAN2.1 is the actual video generator

And can do online for free,

https://wan21.net/ai-video-generator

but sometimes the queue takes a long time. Around 1:30 pm IST, the prompt "A moonlit night sky with scattered clouds" rendered in around a minute, with less than a minute in the queue. The resulting video is below. 5 credits a month for the free account, I think.


Another top result for "free" AI movie generation is this video, which talks about digen.ai - an online video generation and sharing platform - and using a temp mail to get credits! Download just after processing, before the watermark is applied in the "optimizing" step!

At some point, I'll try installing WAN2.1 if possible.


upscaling and resizing with chainner

Chainner - a node-based image processing GUI - seems to have a relatively easy-to-use installer - deb files for Linux, a Windows installer, etc. And it has a dependency manager which can install the required dependencies. Tried it out, but unfortunately, it could not use the external Stable Diffusion API - maybe because I didn't run the web UI with the API enabled? Or has the API changed, so that it gives errors like "invalid HTTP request"? Tried running chainner after running Easy Diffusion - could not connect.

api wiki says latest info is at :7860/docs

http://127.0.0.1:7860/docs

Video - how to upscale images with chainner

Models from https://openmodeldb.info/

(old site was https://upscale.wiki/wiki/Main_Page#The_Model_Database )

Chose https://openmodeldb.info/models/4x-RealWebPhoto-v4-dat2

Took 10 minutes. Seems to be using the CPU and not the GPU.

8x upscale with onnx took 24 minutes. Looks like water colour and not pointillist like the previous upscale with Stable Diffusion

8x upscale with onnx using the model 4x-RealWebPhoto-v4-dat2 compared with the original image before scaling down and upscaling. Click on the image to see it larger.

Probably there may be some models which would do fast upscaling with realistic results (unlike the compact model below which seems to be optimised for anime). 

Recommended models at

https://phhofm.github.io/upscale/favorites.html

Compact model - https://openmodeldb.info/models/2x-HFA2kCompact

For video, the youtube video above used the video frame iterator and the compact model. Now the interface has video input and video output nodes, which automatically create the video from the frames. Using the compact model, the video upscaling and processing completed in just a few seconds for the 250-frame Baba video. Looks a bit like anime / cartoon / water colour. If the numbers under the chainner nodes are the times taken, then 0.04 sec per frame, and 9.8 sec for encoding to video.

Edit: There are lots of youtube tutorials for video upscaling, including ones using colab - https://www.bing.com/videos/search?q=stable+diffusion+video+upscaling

Thursday, April 03, 2025

checking out Easy diffusion

Tried out Easy Diffusion, https://github.com/easydiffusion/easydiffusion

checking to see if it is faster than Stable Diffusion webui - my previous post on that is here.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs#linux

Some useful links - basics of inpainting - https://github.com/easydiffusion/easydiffusion/wiki/Inpainting

https://allprompts.com/stable-diffusion-prompts-for-realistic-photos/

So we should add words like "national geographic", "landscape", "photo", "photography".

Easy Diffusion - better interface, and a bit faster - 45 sec instead of 60 sec to generate a 512x512 image, and just 7 sec to upscale the generated image. But there doesn't seem to be a way to upscale our own images, at least not in the UI. Trying the same generation on the laptop's CPU, what took 45 seconds on the 1060 graphics card took nearly 12 minutes!

Some example prompts and resulting images:

1. moonlit landscape - 

2. photo of night sky with full moon tokyo skyline in silhoutte nikon national geographic



3. moonlit landscape, nikon, national geographic, full moon, trees (with upscaling done 512 to 1024) - 





run unsigned binaries on MacOS

I was not sure if unsigned binaries would run on macOS, and whether one can override its behaviour. Apparently, from macOS 10.15 Catalina onwards, unsigned binaries won't run by default, but we can specifically white-list any binary we want to run, and then it will run.

https://github.molgen.mpg.de/pages/bs/macOSnotes/mac/mac_procs_unsigned.html

https://discussions.apple.com/thread/254786001?sortBy=rank

Either via the GUI - 

* Try to open myFancyBinary in Finder. This will fail.

* Open System Preferences, choose the Security control panel, select the General tab.

* Look for the message: “myFancyBinary was blocked from opening because it is not from an identified developer.”

* Click the Open Anyway button to the right of the message.

or via the terminal - 

spctl --add /path/to/myBinary

And for getting the full path, an easy way is to drag and drop the binary from Finder into the Terminal window.
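The terminal route can also be scripted - a minimal sketch below; only the spctl --add call is from the pages above, the wrapper and path handling are my own:

```python
import os
import shlex
import subprocess

def whitelist_command(binary_path: str) -> list:
    """Build the spctl command that white-lists an unsigned binary.

    Expanding to an absolute path stands in for the drag-and-drop-from-
    Finder trick of getting the full path.
    """
    full_path = os.path.abspath(os.path.expanduser(binary_path))
    return ["spctl", "--add", full_path]

def whitelist(binary_path: str) -> None:
    """Run it - macOS only, and typically needs admin rights (sudo)."""
    cmd = whitelist_command(binary_path)
    print("running:", shlex.join(cmd))
    subprocess.run(["sudo"] + cmd, check=True)
```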


Stable Diffusion tests and benchmarks

I had tried out Stable Diffusion on Windows last year, now I wanted to try it out on Linux and if possible, using Docker. 

Unfortunately, the docker method failed. Searching for "stable diffusion docker with web ui" found a guide which talks about (I had to run everything with a sudo prefix):

sudo docker compose --profile download up --build
(roughly 12 GB of data to be downloaded)

Unfortunately, after 2 hours of downloading, using the Automatic1111 profile,

sudo docker compose --profile auto up --build

Error response from daemon: could not select device driver "nvidia" with capabilities: [[compute utility]]

So, probably I need --profile auto-cpu.

But that gave an error,
File "/stable-diffusion-webui/webui.py", line 13, in <module>
etc
ImportError: cannot import name 'TypeIs' from 'typing_extensions' (/opt/conda/lib/python3.10/site-packages/typing_extensions.py)

Deleted the dir, cloned the repo and started again - again downloading 12 GB - and it failed again:

sudo docker compose --profile auto-cpu up --build

auto-cpu-1  | Mounted styles.csv
auto-cpu-1  | Mounted ui-config.json
auto-cpu-1  | mkdir: created directory '/data/config/auto/extensions'
auto-cpu-1  | Mounted extensions
auto-cpu-1  | Installing extension dependencies (if any)
auto-cpu-1  | Traceback (most recent call last):
auto-cpu-1  |   File "/stable-diffusion-webui/webui.py", line 13, in <module>
auto-cpu-1  |     initialize.imports()
auto-cpu-1  |   File "/stable-diffusion-webui/modules/initialize.py", line 23, in imports
auto-cpu-1  |     import gradio  # noqa: F401
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/gradio/__init__.py", line 3, in <module>
auto-cpu-1  |     import gradio.components as components
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/gradio/components/__init__.py", line 3, in <module>
auto-cpu-1  |     from gradio.components.bar_plot import BarPlot
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/gradio/components/bar_plot.py", line 7, in <module>
auto-cpu-1  |     import altair as alt
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/__init__.py", line 649, in <module>
auto-cpu-1  |     from altair.vegalite import *
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/vegalite/__init__.py", line 2, in <module>
auto-cpu-1  |     from .v5 import *
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/vegalite/v5/__init__.py", line 2, in <module>
auto-cpu-1  |     from altair.expr.core import datum
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/expr/__init__.py", line 11, in <module>
auto-cpu-1  |     from altair.expr.core import ConstExpression, FunctionExpression
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/expr/core.py", line 6, in <module>
auto-cpu-1  |     from altair.utils import SchemaBase
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/utils/__init__.py", line 14, in <module>
auto-cpu-1  |     from .plugin_registry import PluginRegistry
auto-cpu-1  |   File "/opt/conda/lib/python3.10/site-packages/altair/utils/plugin_registry.py", line 13, in <module>
auto-cpu-1  |     from typing_extensions import TypeIs
auto-cpu-1  | ImportError: cannot import name 'TypeIs' from 'typing_extensions' (/opt/conda/lib/python3.10/site-packages/typing_extensions.py)
auto-cpu-1 exited with code 1
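The traceback above suggests the image's typing_extensions predates TypeIs, which was added in typing_extensions 4.10. A possible workaround - unverified, and rather than re-downloading everything - would be upgrading the package inside the container; the container name here is a guess from the auto-cpu-1 log prefix:

```python
import subprocess

def fix_typing_extensions_cmd(container: str = "auto-cpu-1") -> list:
    """Build the docker exec command that upgrades typing_extensions.

    Unverified workaround: TypeIs appeared in typing_extensions 4.10, so
    anything older raises the ImportError seen in the log above.
    """
    return ["docker", "exec", container,
            "pip", "install", "-U", "typing_extensions>=4.10"]

def run_fix(container: str = "auto-cpu-1") -> None:
    # sudo needed since I skipped the docker post-install steps
    subprocess.run(["sudo"] + fix_typing_extensions_cmd(container), check=True)
```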

Then I thought I would try chainner, https://chainner.app/download
gives a deb file, for ease of installation. It has a dependency manager also. 

But the dependency manager does not install stable diffusion! So, trying the native install of stable-diffusion-webui instead.

(The current python version is 3.11.5 on this Linux Mint based on ubuntu 24.04 - which may run into problems, since we're supposed to run on python 3.10-venv.)

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 3.94 GiB of which 31.50 MiB is free. Including non-PyTorch memory, this process has 3.73 GiB memory in use. Of the allocated memory 3.60 GiB is allocated by PyTorch, and 77.43 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF


Stable diffusion model failed to load

Can try the --lowvram command-line option, since we have the NVidia 1060 -  

export COMMANDLINE_ARGS="--lowvram"
./webui.sh

Success. Does a 512x512 generation in ~1 minute, and a 4x ESRGAN upscale resize in ~30 sec.
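That working launch can be wrapped in a small script. Note that the max_split_size_mb value below is an untried assumption taken from the fragmentation hint in the OOM message above, not something I have tuned:

```python
import os
import subprocess

def webui_env() -> dict:
    """Environment for a 4 GB card like the 1060.

    --lowvram is what worked above; max_split_size_mb:64 is an untested
    assumption based on the fragmentation hint in the OOM message.
    """
    env = dict(os.environ)
    env["COMMANDLINE_ARGS"] = "--lowvram"
    env["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"
    return env

def launch_webui(webui_dir: str = "stable-diffusion-webui") -> None:
    """Launch the web UI from its checkout dir with the settings above."""
    subprocess.run(["./webui.sh"], cwd=webui_dir, env=webui_env(), check=True)
```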
this has a listing of how to make prompts.

My primary use case was upscaling. I took a picture which I had clicked with the mobile camera in its "50 megapixel" mode - which shows quite a bit of "softening" when viewed at 100% - then cropped it to 4096x4096, scaled it down to 512x512 (1K images were causing out-of-VRAM errors) and then scaled it up to 4096x4096. Interesting to see that the results for a jpg show lots of ringing, while the results for a png seem to be sharper. 

A portion of jpg input on the left, with output on the right, processed as: scale down to 512 px, save as jpg, then upscale 4x - this shows ringing. Click on the image to view larger size.

A portion of jpg input on the left, with output on the right, processed as: scale down to 512 px, save as png, then upscale to 4096 - this shows an interesting pointillistic effect, and has made edges of the image sharper - clearly seen in the foreground trees. Click on the image to view larger size.

I'll write a separate post on Easy Diffusion, since this post has already become quite long.

Tuesday, April 01, 2025

experiments with LLMs and docker

Tried out some "easy to use locally" Large Language Models (LLMs), with the motivation of being able to use them with our own documents.

Looking for easy ways to install and run them, via Run Local LLMs on Hardware from $50 to $50,000 - We Test and Compare!

ollama list
ollama pull llama3.1:latest (~4 GB - the download went at nearly 4 MBps on the Airtel 40 Mbps connection, and around 6 MBps on Airtel 5G - 12 minutes)
ollama pull llama3.2:latest (~2 GB, and runs faster on low-end hardware)

(Run as ollama run llama3.1:latest --verbose to get tokens/sec.)

With 3.1, it was going at ~4 tokens per second on the desktop - on CPU, I think. With llama3.2, it was going at 23 tokens/sec - quite an acceptable speed. On the Windows laptop, running on CPU, llama3.2 did 9 tokens/sec.


Wanted to try the docker route - installed docker via https://docs.docker.com/engine/install/ubuntu/ by setting up docker's repository. All my docker commands needed to be prefaced by sudo, since I didn't go through the post-install steps.

Trying to run open-webui with the command line given in the youtube video: 

sudo docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama 

sudo docker pull ghcr.io/open-webui/open-webui:main

For Nvidia GPU support, add --gpus all to the docker run command:
Single user mode - 

sudo docker run -d -p 3000:8080 --gpus all -e WEBUI_AUTH=False -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

That also failed: "Unable to load models". Then found that we need to start ollama separately in another terminal window (from How to Install Ollama, Docker, and Open WebUI on Windows - YouTube), and that we need a different set of parameters in the command line if ollama is already installed on our system. So, got open-webui to work with these modifications:

* ran from administrator cmd (on Windows)

* without -d (so that I can see the errors if any - do not detach)

* started ollama first in another window with ollama run llama3.2

docker run -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
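Since "Unable to load models" turned out to mean open-webui couldn't reach ollama, a pre-flight check before running the docker command saves a restart. A small sketch (the function is mine; 11434 is ollama's default API port):

```python
from urllib import error, request

def ollama_up(base: str = "http://localhost:11434",
              timeout: float = 2.0) -> bool:
    """Return True if an ollama server answers at `base`.

    Ollama listens on port 11434 by default; from inside the container,
    open-webui reaches it via host.docker.internal (the --add-host flag).
    """
    try:
        with request.urlopen(base, timeout=timeout) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        return False
```

If this returns False, start ollama first (ollama run llama3.2, as above) before launching the container.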

But connecting stable diffusion to open-webui seems to be complicated, so I will run Stable Diffusion (for image generation and upscaling) from its own web UI instead.


creating ai agents - just links

Free plan is 1000 executions, 14 days.

via

Create Your FIRST AI Agent Today - youtube video

(uses paid n8n, paid chatgpt etc)

serpapi 100 searches per month in free plan - https://serpapi.com/pricing

n8n.io - can run locally also.

https://n8n.io/itops/ - using n8n - nodemation - to automate user creation etc. https://www.youtube.com/watch?v=ovlxledZfM4

https://blog.n8n.io/how-to-make-ai-chatbot/