Friday, April 19, 2024

Filezilla TLS errors

When using FileZilla on Linux Mint at work to connect to an FTP server, I get errors like the following after logging in:


Command: LIST
Response: 150 Opening BINARY mode data connection.
Error: GnuTLS error -110: The TLS connection was non-properly terminated.

Some file transfers go through, while others get 550 errors - when I close FileZilla and open it again, the transfer succeeds.
 
Later, when using Windows and downloading over a domestic BSNL fiber line, there were no errors.
 
According to this set of posts on serverfault, it may have been due either to the firewall gateway blocking passive-mode FTP, or to something related to implicit vs. explicit TLS.

Wednesday, April 17, 2024

methods to download from Amazon S3 bucket (if we own/manage the bucket)

Some ways are mentioned at 

 
Cyberduck can provide a UI for Windows or Mac - "Cyberduck is a libre server and cloud storage browser for Mac and Windows with support for FTP, SFTP, WebDAV, Amazon S3, OpenStack Swift, Backblaze B2, Microsoft Azure & OneDrive, Google Drive and Dropbox."

Saturday, April 13, 2024

workflow_dispatch only from default branch

I like to set GitHub Actions to run only when manually triggered, that is, using workflow_dispatch.

As that page says,  "To trigger the workflow_dispatch event, your workflow must be in the default branch."

But we can change the default branch - as we may want to do for forks etc. - via Repository --> Settings --> Default branch (choose by clicking the "choose another branch" button, which has two opposing arrows as its icon).
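
Once the workflow file is on the default branch, it can also be triggered from a terminal with the GitHub CLI (a sketch - the workflow file name and branch are placeholders):

gh workflow run build.yml --ref main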

comparing the contents of two directories on Windows

 How to Compare Two Folders on Windows 11 and 10 (howtogeek.com)

robocopy "C:\first" "C:\second" /L /NJH /NJS /NP /NS
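
(In that command, /L only lists what would be copied without actually copying anything, /NJH and /NJS suppress the job header and summary, /NP suppresses the per-file progress display, and /NS omits file sizes - so the output is just the list of differences between the two folders.)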

Thursday, April 11, 2024

reminder to take a break - using cron

Wanted to create a pop-up message or reminder (an audio reminder was requested later) for N to take periodic breaks from the computer. Since his usage was mostly passive, Workrave did not work for this purpose - Workrave's timer only progresses when the keyboard or mouse is active.

Looking for solutions, the reminder apps on Mint/Ubuntu seemed to have only flatpak versions, which I did not want to install.

Finally thought it would be simpler to use cron.

But, to pop up messages on the screen with cron, we need some special handling.
https://unix.stackexchange.com/questions/307772/pop-up-a-message-on-the-gui-from-cron
https://unix.stackexchange.com/questions/111188/using-notify-send-with-cron

And for audio, we would need another line to be added to the script. Working script - 

#!/bin/bash
export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
export DISPLAY=:0
# the next line is needed for sound to work
export XDG_RUNTIME_DIR=/run/user/1000
notify-send -u critical "this is the reminder to take a walk"
# the -u critical is supposed to show over full screen apps, but doesn't always work
/usr/bin/mpg123 /home/username/Downloads/Walk.mp3 > /dev/null 2>&1

For help with setting up cron, cron.help is useful. Eg.

*/30 * * * * /home/username/walkremind.sh

for every half an hour.
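
To wire this up (a minimal sketch - path and schedule as used above):

chmod +x /home/username/walkremind.sh   # make the script executable
crontab -e                              # then add the */30 line shown above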

For getting the environment variables' values, used

printenv

And for installing notify-send

sudo apt install libnotify-bin

using a laptop as a secondary monitor using Miracast

Apparently Windows now has built-in support for Miracast, so we can use a Windows laptop as a secondary display for a Windows desktop, for example.

https://www.tomshardware.com/how-to/use-laptop-as-monitor-for-another-pc
(that is, if both machines run Windows)

Miracast client for Linux - miraclecast - to use as sink,
https://askubuntu.com/questions/1254998/how-do-i-launch-miraclecast-after-installation

https://alternativeto.net/software/miraclecast/
https://letsview.com/screen-mirroring
(but not on linux)

https://github.com/albfan/miraclecast

 

Wednesday, April 10, 2024

RAM usage - Edge and Firefox browsers on Linux Mint

I've installed 'Graphical Hardware monitor' to show RAM, disk read and disk write on the panel of the old Macbook Pro running Linux, which has only 4 GB of RAM including VRAM. Curious to see RAM usage, I tried opening Gmail + GDrive + a spreadsheet on the Firefox and Edge browsers separately. With Firefox, total RAM used was 1.92 GB. With Edge, it was 1.63 GB. So, Firefox was using around 300 MB more than Edge in this (not too rigorous) test.

image sequence support for OCVWarp

Prompted by this feature-request for command-line support, OCVWarp version 4.00 now has basic support for image sequences, too. The documentation in the wiki has been updated with caveats. And here is a link to all posts about OCVWarp.

PS. And in December, a test build with MacOS X succeeded on github actions using homebrew to install OpenCV.

Sunday, April 07, 2024

using blender online

As noted at https://digitalarthub.net/3d/how-to-use-blender-online-without-downloading/

"Do note that most of these apps run outdated versions of Blender and they’re pretty laggy",

Since Blender doesn't run any more on the old Macbook running Linux - https://www.blender.org/download/requirements/ - I can use the online versions for quick screenshots and so on.

 

Friday, April 05, 2024

Windows defender finding a Trojan in build from Github actions - false positive

Artifacts from some builds of OCVWarp were being flagged by Windows Defender as containing a trojan, Wacatac. As mentioned in this page, https://github.com/RPCS3/rpcs3/issues/15309 , this is probably a false alarm, since some other builds with minimal changes were not flagged. So, tried "Add an exclusion to Windows Security - Microsoft Support" - that worked well.

Start --> Settings --> Privacy & security --> Virus & threat protection -->  Manage settings --> Exclusions --> Add or remove exclusions.

shifting a blogger blog to Google Analytics 4

There was an email from Google, "All Universal Analytics services cease to function starting July 1, 2024"

Googling "use analytics with blogger", found this support article - and found that this blog was not automatically transferred from the earlier Universal Analytics to Google Analytics 4. Copy-pasted the "Measurement ID" as mentioned in that support article, finding the "Measurement ID" via this article. Now let's wait for a couple of days and see if it works.

Thursday, April 04, 2024

CPU-based stable diffusion

In my previous post about Stable Diffusion and how I could not generate 4k fulldome images with it, I had mentioned running out of RAM as the reason it failed. I thought that if there were a way to run it on CPU instead of GPU, I could use a more recent, faster machine - and perhaps use virtual memory and run for a longer time - to generate 4096 x 4096 fulldome images. But trying out rupeshs/fastsdcpu (Fast stable diffusion on CPU), I found that even if I do img2img using a 4K fulldome image as the input, the output is only 1024 x 1024.

There may be ways to run google colab or something like that, and generate 4K fulldome images - but that is something to be tested later. At first glance, it seems that even Dall-E is limited to 1024x1024, and people have to stitch generated images together to make bigger canvases, as mentioned in this reddit thread. And this thread gives a Stable Diffusion guide which uses upscaling - which has issues as mentioned in my previous post.

super-resolution experiments

I thought of doing some super-resolution experiments directly with ffmpeg, or something similar with VirtualDub, AviSynth or Avidemux, but this detailed post gives me pause - mostly not much better than Lanczos, as they say - scale - How do the super resolution filters in FFmpeg work? - Video Production Stack Exchange
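
For comparison, a plain Lanczos upscale with ffmpeg is a one-liner (a sketch - file names and target size are hypothetical):

ffmpeg -i input-1080p.mp4 -vf "scale=3840:2160:flags=lanczos" -c:a copy output-4k.mp4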

Tuesday, April 02, 2024

stable diffusion experiments

Now that I have access to a GPU, I tried out some AI-based image resizing, video upscaling and image generation, using stable diffusion as noted in a previous post. This post notes some of the pros and cons of the technology as it stands now, in April 2024.

1. There doesn't seem to be a way to incrementally correct images as with ChatGPT-generated code. We need to fine-tune our prompts if we need better results, and run the generation once again.

2. On this machine - a GTX 1050 GPU, quite slow - stable diffusion with the webui takes around a minute to generate a 512x512 default image with an img2img prompt and the default 20 steps.

3. Following this guide on fddb, my results with generation were not so great. Changing "little girl" to "businessman with briefcase" did not result in a briefcase in the 4-5 iterations I tried. Additionally, scaling up the image showed that the skyscrapers were not realistic at all. Perhaps this can be fixed by generating at a higher resolution instead of first generating at 512x512 and then scaling up - but I can't do that, since I run into 'CUDA out of memory' errors. Edit - further experiments in this post seem to indicate that 1024x1024 is the upper limit for most models.

Example - part of the fddb example image, upscaled to 4k using Lanczos resizing, and then using R-ESRGAN-4x+ - with the latter, we see the cartoonish quality.

4. Trying to upscale a video which had a series of stills - something like a slideshow - resulted in lots of image flicker in the upscaled video. The reason for the flicker is some horrendous hallucination; close-ups from a couple of frames are shown below.
Original - 

Upscaled - 


and the next frame has the shading which causes the flicker.


Decided to use Lanczos instead of AI in that case. But single-image upscaling can give good results, especially if the image is a generic image and not an exact likeness of someone. In the case of an exact likeness, some hallucination of features is seen. Example close-up: something like spectacles appears on the bridge of the nose, not present in the original.

Monday, April 01, 2024

supported browsers for old versions of Windows and MacOS

Copy-pasting from an email I sent:

There is a Chromium-based browser called Thorium which has support for Win7 and above.


(So it may allow use of Google Drive, Google Meet etc which nowadays say "not supported" on older versions of Chrome.)

(Via

A list of browsers for older Macs -

Wednesday, March 27, 2024

Unreal engine for fulldome creation

One of my colleagues passed on this tutorial on how to get started with Unreal Engine - Unreal Engine 5 Beginner Tutorial - UE5 Starter Course from Unreal Sensei

It looks like VR export is also possible relatively easily, but with some caveats as seen in the comments of the youtube video - Unreal Engine 5 360 Panoramic EASY! | No coding No Plugins

I'm not going into it right now, since I have too much on my plate already, and lots of pending content generation with translations, OpenSpace, Stellarium and available VR360 videos.

Tuesday, March 26, 2024

Action required: Migrate your .NET apps to the isolated worker model

There was an email notice from the Azure portal, Migrate your .NET apps in Azure Functions to the isolated worker model by 10 November 2026.

I wanted to check for .NET apps in the various subscriptions, so I followed the method given at the link in the email.

I also found the app under "App Services" in the relevant tenant - so there was no need to spin up a cloud shell.
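
For reference, the function apps can also be listed with the Azure CLI (a sketch I didn't end up needing, since the App Services blade showed the app):

az functionapp list --query "[].{name:name, resourceGroup:resourceGroup}" -o table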

Since we're currently not using this function (which automates starting the dev server every day, after it is shut down at night), I'll just delete this function instead of migrating it.

creating a simple (but large) family tree

Looking for free (of cost) solutions for making a simple (but large) family tree, I had installed Gramps, but found it less than friendly to use, at least for my purposes. Free family tree templates in Excel and Google Docs - https://spreadsheetpoint.com/family-tree-template-google-docs/ - did not make it very quick to create a tree, either. Then more searches turned up familyecho.com - very quick and easy to use, browser-based, exports to html.

Monday, March 25, 2024

experiments enabled by NVIDIA graphics card

Now that I have access to a desktop computer with an NVIDIA graphics card, even though it's just a GTX1050 with 8 GB VRAM, I could try out several useful tools which generally need GPU support to run well.

  1. Openspace - I've contributed to the documentation with an FAQ on Planetarium usage.
  2. Running Sheepit autoupdater when I'm not using the machine for anything else, racking up points so that when I need to render something in Blender, I can cloud-render it faster.
  3. Upscaling some of our old videos - from VHS or miniDV - to 4K. Initial idea from Awesome Blender, then using the tutorial from nextdiffusion.ai, taking care to block images on that site (could be NSFW in many contexts).
  4. Will also check out creating some animations with text prompts, and perhaps even fulldome images with Stable Diffusion, since the webui makes it quite a bit more accessible.

Saturday, March 23, 2024

Moodle authentication via an external website

There was a query about how we can authenticate users on one of our LMS servers running Moodle, if they sign up and sign in using an external website. My reply was: 

Moodle supports the following authentication methods as given in the documentation link below:


If (the other website) can supply any of those methods, we can implement the plugin on Moodle.

Friday, March 22, 2024

push to Github failing on Google Apps Scripts

One of our Google Apps Scripts sometimes fails to push to Github, with the error message "(filename) does not match" - the quick fix is to just delete the relevant file(s) in the git repo and run the script from the console - it then pushes successfully, and retains the correct hash for the next push.

 Edit: link to a repo with example GAS code for pushing a file to a github repo - a json file to a github pages site in this case - https://github.com/hn-88/GAS-daily-upload-to-sched

Tuesday, March 19, 2024

Mirage3d show previews

Mirage3d has a lot of show previews and demo reels on Vimeo, as well as full-length flat previews of shows like Dinosaurs at Dusk. Could download using savevideo.me with the play.vimeo.com url; download buttons are also available for some of their videos, as is the savefromnet extension.

Sunday, March 17, 2024

Netlify serverless functions

Interesting option for a fast, low-traffic API - Netlify serverless functions on the free plan - using node.js on the back-end. (Via https://ravisiyerblog.netlify.app/ - https://raviswdev.blogspot.com/2024/01/roadmap-to-learning-full-stack-web.html )

Intro to Serverless Functions | Netlify

Tutorial - https://www.netlify.com/blog/intro-to-serverless-functions/

I just forked this repo and deployed it to my account on Netlify, and it started working. I can simply modify the code to suit my requirement, with no need for me to install node etc. - though of course, if I need to change the functionality and use more node packages, I would need to do all that.

And it's very fast compared to Google Apps script. 
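
Calling a deployed function is then just an HTTP request (a sketch - the site and function names are placeholders, following Netlify's /.netlify/functions/ path convention):

curl https://your-site.netlify.app/.netlify/functions/hello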

Free usage comparison:

GAS - https://developers.google.com/apps-script/guides/services/quotas - per day and per hour quotas for some services, 6 minutes per execution.

Netlify free plan - https://www.netlify.com/pricing/ - 1M per month serverless function calls, 100 GB per month.

ffmpeg screen recording

Seeing that the OCVWarp post comparing it to ffmpeg was mentioned in February's search performance results from Google as the top growing page, I browsed ffmpeg.org to check how easy or difficult it would be to contribute a plugin to ffmpeg. Seems a bit complicated. Browsing further, found these instructions in the wiki for screen-recording video using ffmpeg - quite useful information. And ffmpeg's wiki and bug tracking run on Trac.
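
From that wiki page, a typical X11 screen-recording invocation is along these lines (a sketch - display, capture size and output name are assumptions):

ffmpeg -f x11grab -video_size 1920x1080 -framerate 25 -i :0.0 output.mkv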

Friday, March 15, 2024

Transition from Azure classic administrator roles to RBAC roles

There was an email from the Azure portal, asking us to transition from Azure classic administrator roles to RBAC roles for one of our subscriptions. 


As mentioned in this documentation,


I have added the co-administrators and the service administrators as seen in the classic administrator pane to "Owner" role for each of the subscriptions.

Wednesday, March 13, 2024

creating a live stream on youtube using the api and google apps script

There was a requirement for 
(a) streaming an http audio stream to youtube with ffmpeg
(b) automating the creation of the broadcast, making sure that each broadcast is less than 12 hours long so that youtube will archive the video

The method we followed is detailed in this github repo folder - 
(Work in progress as of now - the main concept works, but some refinements like setting the thumbnail, descriptive title, etc to be done.)
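
For part (a), the ffmpeg invocation is along these lines (a sketch, not the exact command from the repo - the stream URL and key are placeholders, and the black video track exists only because YouTube requires a video stream):

ffmpeg -re -i "http://example.com/audio-stream" \
  -f lavfi -i "color=c=black:s=1280x720:r=30" \
  -map 1:v -map 0:a \
  -c:v libx264 -preset veryfast -g 60 \
  -c:a aac -b:a 128k \
  -shortest \
  -f flv "rtmp://a.rtmp.youtube.com/live2/STREAM_KEY"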

The first option was to implement this using bash scripts with something like rwxrob/cmd-yt - but the OAuth flow using Go did not work on the first try - go list -f '{{.Target}}' did not return anything. The steps I did are listed below.

apt install jq
tput # already installed
apt install pandoc
# install go with https://go.dev/doc/install
wget https://go.dev/dl/go1.22.1.linux-amd64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.22.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
# and added that line to /etc/profile

For using the auth-go package, we need to compile and install it.
cd ~/auth-go/auth-go-main
go build
but
go list -f '{{.Target}}'
did not return anything.

Then thought of trying Google Apps Script instead - there, we just need to enable the relevant API service.

certbot did not auto-renew a domain - it had expired

Even manually trying to renew a domain's SSL certificate from LetsEncrypt, which had not been auto-renewed, did not work. 

Certbot failed to authenticate some domains (authenticator: apache). The Certificate Authority reported these problems:
  Domain: theconcerneddomain.ours
  Type:   unauthorized

Checking the DNS, found the relevant domain was pointing to a godaddy ip address instead of our ip address. Then, checked the whois record and found that the domain had expired. Alerted the concerned person, they renewed the domain, and a few hours later, certbot auto-renewed the SSL certificate, too.
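
The checks themselves are quick from a shell (a sketch, with the placeholder domain from above):

dig +short theconcerneddomain.ours              # where is the A record pointing?
whois theconcerneddomain.ours | grep -i expir   # registration expiry date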

Monday, March 11, 2024

server timing out for a large download - fix was server-side - nginx

Downloading a large file from noirlab.edu, only 4.5 GB had been downloaded before "server error" was reported. When I tried to resume the download, the server would not respond. (And the issue was not that the server was unreachable - if I started a fresh download in a different location, it would start; only the resume was not working.)

Unfortunately, with the noirlab link, we're unable to resume partial downloads with aria2c after 12 hours or so. The download also seems to stop after a few hours of downloading, with a server error message.  

I've copy-pasted relevant parts of a log file produced by aria2, showing that the server times out when the client requests either
(a) more than one chunk, or
(b) chunk near the end of the file

2024-02-15 22:09:56.738161 [DEBUG] [SocketCore.cc:993] Securely connected to noirlab.edu (54.200.108.162:443) with TLSv1.2
2024-02-15 22:09:56.738161 [INFO] [HttpConnection.cc:128] CUID#7 - Requesting:
GET /public/media/archives/videos/dome_4kmaster/big-astronomy.zip HTTP/1.1
User-Agent: aria2/1.36.0
Accept: */*,application/metalink4+xml,application/metalink+xml
Want-Digest: SHA-512;q=1, SHA-256;q=1, SHA;q=0.1

2024-02-15 22:09:57.336946 [DEBUG] [WinTLSSession.cc:435] WinTLS: Read request: 16384 buffered: 14200
2024-02-15 22:09:57.336946 [INFO] [HttpConnection.cc:163] CUID#7 - Response received:
HTTP/1.1 200 OK
Date: Thu, 15 Feb 2024 16:39:56 GMT
Content-Type: application/zip
Content-Length: 262602447953
Connection: keep-alive
Server: nginx
Last-Modified: Wed, 01 Nov 2023 21:27:23 GMT
ETag: "6542c2bb-3d24535c51"
Access-Control-Allow-Origin: *
X-Cache-Status: BYPASS
Accept-Ranges: bytes

(Lots of lines with CUID#2 to 12 and so on, omitted here,...)

2024-02-15 22:11:09.882632 [DEBUG] [AbstractCommand.cc:181] CUID#13 - socket: read:0, write:0, hup:0, err:0

2024-02-15 22:11:10.885437 [DEBUG] [AbstractCommand.cc:181] CUID#13 - socket: read:0, write:0, hup:0, err:0

(Lots more lines with socket read:0 etc, finally ... )

2024-02-15 22:12:04.827538 [DEBUG] [AbstractCommand.cc:325] CUID#13 - Marking IP address 54.149.252.235 as bad
2024-02-15 22:12:04.827538 [INFO] [AbstractCommand.cc:366] CUID#13 - Restarting the download. URI=https://noirlab.edu/public/media/archives/videos/dome_4kmaster/big-astronomy.zip
  -> [AbstractCommand.cc:340] errorCode=2 Timeout.

Reported this to the people at the website, and they made an ip-address-only link available for me to try. That worked, with the resume functionality working even after many hours.

[#b2e54e 64GiB/244GiB(26%) CN:15 DL:1.0MiB ETA:50h51m51s]
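
For reference, the resumable, multi-connection aria2c invocation is along these lines (a sketch - -c resumes a partial download, -x/-s set the max connections and splits per server, matching the CN:15 in the status line above):

aria2c -c -x 15 -s 15 "https://noirlab.edu/public/media/archives/videos/dome_4kmaster/big-astronomy.zip"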

Thursday, February 29, 2024

procedure to change state - Indian Driving License

The procedure in this Malayalam video is for changing the state to Kerala, but I suppose a similar procedure would work for changing to another state which is available on the Parivahan Sarathi website.

Notes: Can do renewal and address change in one step.

Parivahan website

Online services -> driving license related

Choose destination state

driving license -> Service on DL (Renewal/Duplicate...)

The DL number must be 15 characters when entered - add zeros after the year if not.

If doing this along with a renewal, don't use this option - apply for renewal on the first page instead.

No need for attestation of documents - they just must be smaller than 500 kB.

There's also an application status link on the Parivahan Sarathi service.
2-3 days for approval.
~ 1 week for printing and dispatch. (This is for Kerala RTOs).

Wednesday, February 28, 2024

free satellite imagery from Sentinel and other sources

Nowadays we have grown used to Google Maps having high-resolution satellite images. But they are copyrighted images, and the Google credits need to be shown onscreen if we use them in a planetarium show or something like that. Looking at OpenSpace and its data sources, googled for some free sources of hi-res satellite images -

for quick and easy temporal images.

for slightly lower resolution, but with older data also

Sentinel images for hi-res
Full global coverage since March 2017. 10, 20 & 60 metre resolutions.

World Imagery Wayback - easy to navigate like google maps, but with ESRI terms of use - allows noncommercial use.
and ESRI World Imagery - up to 3 cm resolution! For free with ESRI terms of use - allows noncommercial use.



changing the storage limit on onedrive for a particular user

Yesterday, I had looked at portal.azure.com and the Users link there. That interface only allowed adding licenses; I did not see a screen for quota changes.

Today, I tried going to admin.microsoft.com. There, I get the option to edit the storage limit for OneDrive. But we can only limit it to less than 1 TB; we cannot increase it beyond 1 TB (as per the license being used by that user).

Sunday, February 25, 2024

OCVWarp with frameserving

I tried to incorporate frame-serving via AviSynth+ into my OCVWarp workflow described here, so that steps 1 and 2 could be conflated and the rendering time halved - most of the rendering time is taken up by the CPU usage of the codec, I believe. But it did not work.

The latest AviSynth+ files work with the latest VirtualDub - even 4096x4096 videos are rendered with no problems. But creating a frameserver with VirtualDub and pointing OCVWarp to the frameserver causes OCVWarp to crash or stop responding. AVFS also did not help.

Appending ConvertToRGB24() to the script did not help either - not responding even in VirtualDub.

(To get AVFS to work, had to run pfm-192-vapoursynth-win.exe in an administrator cmd.)

Friday, February 23, 2024

ffmpeg recipes for image slideshows

 Just to bookmark this, I've not tried these out yet - https://stackoverflow.com/questions/24961127/how-to-create-a-video-from-images-with-ffmpeg - where there is a long answer with resizing options, adding audio, etc.
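
The basic recipe from that thread looks like this (untested by me, as noted above - a sketch using glob input, one image per second, 30 fps output):

ffmpeg -framerate 1 -pattern_type glob -i '*.png' -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4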

api js formatting - make it readable in Notepad++

Since javascript code obtained from servers is minified and on a single line, it can be difficult to read.

As the top post in this link says - Is it possible to indent JavaScript code in Notepad++? - Stack Overflow:

  1. Select menu Plugins -> Plugin Manager -> Show Plugin Manager
  2. Check the JSTool checkbox, install and restart Notepad++
  3. Open the js file and then go to menu -> Plugins -> JSTool -> JSFormat

Tuesday, February 20, 2024

emails from microsoft365 bouncing, showing "no such user"

One of our domains runs their email on Microsoft365/Outlook, and they were unable to send emails to some other domains they own. Some messages would bounce saying "no such user". It turned out that the recipient domain had been added to Microsoft365 as an additional domain, but no users were added under that domain. Also, that domain was actively being used, with MX records pointing to Google Workspace. So Outlook or Microsoft365 was not looking at the MX records, but instead looking at the internal users in Active Directory, and deciding that there was "no such user".

The solution was to remove that recipient domain from Custom domain names - Microsoft Azure

Any other users who had been added to the Microsoft active directory would need to check their inbox on Outlook and not on Google Workspace for any emails sent from our domain on Outlook. This would be relevant for emails being sent from Onedrive, Microsoft Teams, etc. 

Friday, February 16, 2024

DKIM and SPF for a domain using Microsoft Outlook as email provider

One of our domain administrators got an email from Microsoft, "Ensure your email authentication records are set up to avoid mail flow issues to third-party email accounts." Apparently, DKIM was not set up properly.

The page at
Email authentication settings - Microsoft Defender
says,

|Microsoft.Exchange.Management.Tasks.ValidationException|CNAME record does not exist for this config. Please publish the following two CNAME records first.
Domain Name : ourdomain.org
Host Name : selector1._domainkey
Points to address or value: selector1-ourdomain-org._domainkey.ourdomain.onmicrosoft.com

(This is tested and working fine with dig.)


Host Name : selector2._domainkey
Points to address or value: selector2-ourdomain-org._domainkey.ourdomain.onmicrosoft.com

(This record returns SOA. I thought there was something wrong with the onmicrosoft.com DNS settings, but probably this record is used only when the key needs to be rotated.)

I set the DKIM CNAME records as per the email.
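
Verifying the published records with dig (a sketch, using the anonymized names from above):

dig +short CNAME selector1._domainkey.ourdomain.org
# expected: selector1-ourdomain-org._domainkey.ourdomain.onmicrosoft.com.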

The page
Email authentication settings - Microsoft Defender
says sync will take a few minutes to several days. 

After 90 minutes or so, I checked and found DKIM successfully enabled on that page. The page said it would take several minutes to roll out the changes, so DKIM signing would commence after a few minutes or hours.

This page
Set up SPF to identify valid email sources for your Microsoft 365 domain | Microsoft Learn
says that SPF is already set. And I verified that it is set properly using dig. So that should solve this issue.


Thursday, February 15, 2024

Uploading of Specific Fonts on WordPress with Astra Theme

Copy-pasting from an email exchange:

we were trying to upload special fonts for headings and subheadings in the home page and we could not do it.

Can you pls check if it is related to any file permission issue.
 
Are you trying to install those fonts locally or use them by linking to Google's servers? 

If you are trying to install them locally, you probably may not have the required permissions.

I've now downloaded those fonts and installed them on the wp server locally. Maybe wpastra can pick them up from there. Please check.
 
(When they said it still did not work, I responded as follows:)
 
1. Google fonts can be linked in web pages in such a way that the users of the web pages get the fonts from google and display it on their browsers. If this has to work properly, there are several things which have to be taken care of. Example:

2. Another way in which fonts can be embedded in web pages is using fonts on your web server, like this,

3. Another way in which fonts (any font) can be used is to just define the font, without embedding it - in this case, the font will fail to load if the same font is not present in the user's machine.

Without knowing what you are trying, whether the situation is 1, 2 or 3, I cannot help you further.

If you cannot tell me whether it is 1, 2 or 3, I would need access to a test page where the issue is seen.
 
(And they confirmed that it is working:)
The font issue is resolved, and we can choose the font type from the font family. We found out that we were missing one step for changing the font.


60 fps to 30 fps with ffmpeg without reencoding


ffmpeg -itsscale 2.0 -i input-60fps.mp4 -vcodec copy output-30fps.mp4

(Use the "copy" video codec, and scale the input timestamps using the floating point number 2.0)

Tuesday, February 13, 2024

creating stellarium flybys

As mentioned in September last year, Stellarium now has some features which enable creation of flybys. Perhaps Saturn is the most spectacular? Tried out creating two fulldome versions suitable for planetariums, one with Saturn visible at the apex of the dome, and another with Saturn near the front side of the dome, suitable for planetariums with unidirectional seating. 

Code is at https://github.com/hn-88/stellarium-scripts

More flybys available via stephanpls at https://github.com/Stellarium/stellarium/discussions/2178



Monday, February 12, 2024

adding SPF record for one of our servers

One of our servers, which used to send a few notification emails to us via postfix, stopped doing so - status=bounced (host gmail-smtp-in.l.google.com[142.251.2.27] said: 550-5.7.26 This mail has been blocked because the sender is unauthenticated. 550-5.7.26 Gmail requires all senders to authenticate with either SPF or DKIM.

So, set up an SPF record in cloudflare, and all is well again.
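
The record can then be checked with dig (a sketch - the value shown is the typical minimal form for a single sending server, not necessarily our exact record):

dig +short TXT ourserver.ourdomain.org
# e.g. "v=spf1 ip4:203.0.113.10 ~all"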

optimizing kodi for planetarium projection

By default, Kodi on LibreELEC running on a Raspberry Pi 4 was detecting our Optoma ZK 507 projector and setting the output resolution to 4096x2160 @ 30 Hz. This was all right, but our 3840x2160 and 1920x1080 shows were showing a black bar at the bottom. Checking for solutions - not the "minimise black bars" global setting, but instead setting the resolution, from the comments on this thread,

https://discourse.osmc.tv/t/howto-guide-to-the-kodi-whitelist-function-and-related-settings/83919

I did not need to make a whitelist or blacklist in Kodi (blacklist is for devices, whitelist is for allowed video modes) - just setting the resolution at Settings --> System --> Display to 3840x2160 @ 30 Hz did the trick. Now, no black bars at the bottom of the FHD or UHD videos.


Another issue was hiss in the sound output via a generic USB sound device, when amplified by our sound system. We set up a limiter and a gate with our old Behringer MDX 2200 compressor. Then, for further sweetening, I got a second-hand Lexicon Alpha sound device via bajaao.com for approximately one day's salary, and used that for sound output - no more hiss.

For the Lexicon Alpha to be recognized, we had to plug it in after the Raspberry Pi booted into LibreELEC and Kodi - on every reboot. After the device is recognized, a one-time configuration change in Kodi --> Settings --> Audio settings --> Audio device sets the output device to Lexicon Analog (not Lexicon SPDIF in this case).

Trying a direct connection of the Lexicon Alpha to the RPi (as against a connection via a USB hub) also did not resolve the issue of it not being recognized on boot. So, we're plugging it in after every reboot (on the USB hub, for convenience).

Also available -
https://kodi.wiki/view/Settings/System/Audio#Maintain_original_volume_on_downmix
(not directly useful for us, since we're not downmixing all our files)
 

https://kodi.wiki/view/Settings/System/Audio#Volume_control_steps
(set it to the maximum of 50)

And, compression also seems to be available, via specific video's context menu
https://forum.kodi.tv/showthread.php?tid=196313
https://kodi.wiki/index.php?title=Video_playback#Audio_Settings

If this could be a global setting, we could perhaps avoid the hardware compressor, too.

Friday, February 09, 2024

huggingface and making your own customized chatbot for free

Huggingface released a free method to build customized AI chatbots

https://venturebeat.com/ai/hugging-face-launches-open-source-ai-assistant-maker-to-rival-openais-custom-gpts/

That link also discusses some pros and cons - like no web lookups for HuggingChat. So, I looked up how we can make our own chatbot: the full detailed step-by-step process, how much manual work is involved, and what we need to customize it.


1. (seems to be generated by a chatbot? Not very useful at all.)

2. (this is good. Need to input data as csv - query and response examples)

3. (this seems to be the generic steps without customization)

So, link (2) seems to indicate that we need to train it using a csv file with sample queries and responses. This is something which, for example, a large number of volunteers could generate. If each one generates questions and answers based on one Discourse, for example, 200 or 2000 people could generate quite a good data set.


Thursday, February 01, 2024

IPV6 readiness

A blog post about AWS charging for IPV4 addresses -

We could use AAAA records (IPV6) instead of A records (IPV4) to avoid these charges, since Cloudflare would proxy to both IPV4 and IPV6 clients. In case we need to SSH into the servers and our ISP / router does not support IPV6, we might need something like this,
 
A and AAAA are auto generated for cloudflare proxied servers.

But for ssh, we will need to use cloudflare tunnels.
Connect private networks · Cloudflare Zero Trust docs 
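
On the client side, SSH over a Cloudflare tunnel typically goes through cloudflared as a ProxyCommand (a sketch following the pattern in the Cloudflare docs - the hostname is a placeholder):

ssh -o ProxyCommand="cloudflared access ssh --hostname %h" user@ssh.ourdomain.org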
 
 
According to https://aws.amazon.com/blogs/networking-and-content-delivery/dual-stack-ipv6-architectures-for-aws-and-hybrid-networks/:

    1. Go to your VPC settings and assign a /56 prefix.
    2. Go to subnet settings within the VPC and assign a /64 to the subnet.
    3. Go to the instance's network interface settings and enable IPv6 address assignment. 

Full howto:
 
How to connect to ssh without ipv6, from Amazon console instead of Cloudflare -
(could not try out since my account doesn't have the required permissions.)
 
On Azure it seems to be fairly straightforward to add IPV6
 
Created an ssh tunnel with

Then connect with warp using
 
Probably we don't need warp, only need to make sure the tunnel is to 2222 and not 22.
checking config files as per
change port at ~/.cloudflared/config.yml

But no, we don't have those config files because we created from dashboard - 

Example config at
which is live at
 
There, they talk of a browser rendered terminal.
 

Logged in successfully with the browser-rendered terminal on the Edge browser on Windows 11.
 
Possible pain point - note that for each login (with a timeout of 24 hours as the default), we have to copy-paste the private key, in PEM format, into the authentication form. PEM format for the private key is a must; OpenSSH format doesn't work - Browser SSH Private Keys not working - Zero Trust / Access - Cloudflare Community.

Monday, January 29, 2024

increasing file upload size in Wordpress

On one of our servers running Ubuntu 22.04, WordPress reported that the upload size is set to 2 MB. Changed it from 2M to 50M in the global php settings at /etc/php*/8.1/apache2/php.ini - that fixed the issue.
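
The change itself, as shell commands (a sketch - post_max_size is my addition, since it also caps uploads if left small; the path is shown without the glob):

sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 50M/' /etc/php/8.1/apache2/php.ini
sudo sed -i 's/^post_max_size = .*/post_max_size = 50M/' /etc/php/8.1/apache2/php.ini
sudo systemctl restart apache2   # apply the new settings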

Sunday, January 28, 2024

email configuration changes on server - app password and <<< for mail command

With Google phasing out support for "less secure apps", we had to use an "App password" for one of our servers. On PB's terminal on Mac, Ctrl+D was mapped to something else, so we could not use the usual method of testing with the mail command, where we end the body with Ctrl+D. Instead, tested by using the three less-than symbols (a here-string) to send email, like

mail -s "<Subject>" recipient@something.com <<< "<Mail Body>"

Google phasing out password-based SMTP / POP3 / IMAP

We got an email with "[Action Required]", with a link to this blog post from google - https://workspaceupdates.googleblog.com/2023/09/winding-down-google-sync-and-less-secure-apps-support.html

We have used two different methods to deal with this - one solution using XOAuth, and one using App Passwords.

1. For some of our Moodle instances which used Google Workspace emails for outbound emails, we could use the in-built XOAuth support as explained here,

Admins can enable Gmail XOauth2 for outgoing and incoming mail

So I created two separate sets of credentials for one of the email ids, since the redirect url for each (one production and one development) server would be different as mentioned in the documentation linked from the tracker link above -

server.url/admin/oauth2callback.php

2. For our internal server using ssmtp as mentioned in this earlier post, this procedure (or the use of PHPMailer which supports XOAuth) would not be suitable, since it does not have a public-facing website url.

(
ssmtp seems to be orphaned since 2019, 

msmtp doesn't seem to have oauth,

describes a python solution, but needs refresh, and needs a browser.

)

In passing, the email from Google mentions that App Passwords are not going away - "Password-based access (with the exception of App Passwords) will no longer be supported". We can create App Passwords if we enable 2FA. So, we enabled 2FA, created an App Password, and just replaced the password in our earlier ssmtp configuration file with the App Password (without spaces). And that works.

 

Saturday, January 27, 2024

Firebase notification and action taken

 Firebase sent us an email with the information given in this link about email enumeration protection,

https://cloud.google.com/identity-platform/docs/admin/email-enumeration-protection

So, enabled the protection for the VV app's projects, following the procedure given in the bottom part of the link above.

Thursday, January 25, 2024

Merriam-Webster dictionary api and moodle plugin

This moodle plugin "Dictionary" requires a one-time free registration to the Merriam-Webster api at https://dictionaryapi.com/ - they currently provide 1000 free lookups per day per key, and 2 free keys per registration. The plugin uses one key for dictionary, one for thesaurus.

Tuesday, January 23, 2024

spam via Moodle support form, and ways to prevent it

For the first time, I got some spam apparently sent via one of our Moodle servers. This forum thread describes the issue and the solution - 

Moodle in English: How to remove link to Contact site support to avoid SPAM | Moodle.org

Instead of commenting out the code, used the method mentioned lower down in the thread for Moodle 4.1 and above - going to Site administration --> Server --> Support contact and changing the option to make it available only to logged-in users. Did the same thing on two of our production instances, too.

 

Friday, January 19, 2024

tech support options for Moodle

Copy-pasting from an email exchange - 

(Deleting the part about my availability)

Paid tech support by email/telephone - Nettigritty.com 
We have found that they usually respond within 3-4 hours.
But a caveat - they would only extend support for those services for which you have paid them. For example, if you have paid for web hosting, they will support you only in web hosting related issues. They may not be able to help you with open-ended questions or how-do-i-do-this-in-moodle type of questions, though, of course, you could ask them if they will do so.

Moodle tech support (paid) is available,
but I've not tried them.

Moodle documentation is available via 
and the forums,
Moodle in English: Forums | Moodle.org

Thursday, January 18, 2024

SheepIt render farm - benchmarks and mini-howto for rendering fluids/particle systems - Blender 3.0 to 3.6

As mentioned in my previous post, SheepIt render farm is a free render farm for Blender Cycles and Eevee engines, based on mutual co-operation - we connect our machines to the render farm to earn points, and can get our projects rendered in double-quick time in return. 

Their FAQ page lists answers to most questions one may have about the service. I've uploaded an example of a project which includes the baked cache for particle systems, on Github, here. As mentioned in their FAQ, the steps to do this are:
1. Make all paths relative - in Blender, choose 
File --> External Data --> Make All Paths Relative

2. If you need to render a baked fluid/particle system, zip the blender file and the cache directory, ensuring that the cache directory is also set as a relative path. In my case, I had set the cache directory to be inside the project directory to simplify matters. In the project above, this is located in the Physics tab of the Smoke Domain object as seen in the screenshot below. As you can see, the current directory is indicated by a leading double-slash - //.

3. One point to note is that choosing File --> Save As and saving the project to another directory, and then copying the cache directory there, does not work (at least in Blender 3.2 on Windows) - probably because the file paths get mangled. Anyway, if you want to break up the project into multiple renders due to the bake files being too large, copy-paste the project outside Blender instead; don't use File --> Save As.

4. There is a script available on the Forums for splitting the project, in case you want to automate the process. Or, like I did, you can just choose to bake all, and then copy only the .vdb files in the data subfolder of the cache for a limited number of frames. For the zoomed out view, for frames after 275, this was something like 30-50 frames per project. And choose to render only those frames. Then create a fresh zip file for the next set of frames. And so on.

Here are some project statistics, for the zoomed in project render, which was taking around 3 minutes per frame on my machine:

Summary

  • Storage used: 971.8 MB
  • Cumulated time of render: 9h01m
  • Points spent: 5,767 points
  • Real duration of render: 1h23m
  • On reference per frame rendertime: 0m51s
On my machine, the 600 frames would have taken 30 hours to render, and even on the Studio Mac it would have taken 5-6 hours, but SheepIt finished it in less than 90 minutes due to the power of distributed processing. 

So, nowadays I leave the SheepIt client running whenever I switch on my machine, to accumulate points. A borrowed CPU from VV, fitted with the GTX 1050 Ti card which I purchased for my research, also helps at the office. The 1050 Ti is only 4% of the speed of the standard GPU, but since most of the projects specify GPU, it usually gets some project to render - as against the i5-1235U laptop CPU, which often goes many minutes with "No jobs to render".

Statistics of the three client machines:

CPU: Intel(R) Core(TM) i7 CPU M 620 @ 2.67GHz x4
RAM allowed: 1.3 GB
RAM available: 3.9 GB
Max render time per frame: -
Power CPU: 6 %
Scheduler: very_slow_computer

-------------------------------------------------------------------------------------------------------------------------

CPU: Intel(R) Core(TM) i5-4430 CPU @ 3.00GHz x4
RAM allowed: 4.8 GB
RAM available: 8.3 GB
Max render time per frame: -
GPU: GeForce GTX 1050 Ti
VRAM: 4.3 GB
Compute device: CPU, GPU
Power CPU: 34 %
Power GPU: 4 %
Scheduler: Default

-------------------------------------------------------------------------------------------------------------------------

CPU: 12th Gen Intel(R) Core(TM) i5-1235U x12
RAM allowed: 2.9 GB
RAM available: 8.1 GB
Max render time per frame: 1h
Power CPU: 72 %
Scheduler: Default
 

And finally, here's a youtube video of the rendered output. This is a "Fulldome" video, meant for projection in a planetarium. (Since this is my first effort, the launch needs more work to look more realistic - currently the motion of the rocket looks a bit cartoonish.)