When using FileZilla on Linux Mint at work to connect to an FTP server, after logging in, I get errors like the following:
Command: LIST
Response: 150 Opening BINARY mode data connection.
Error: GnuTLS error -110: The TLS connection was non-properly terminated.
Some file transfers go through, while others get 550 errors - when I close FileZilla and open it again, the transfer succeeds.
Later, when using Windows and downloading using a domestic BSNL fiber line, there were no errors.
According to this set of posts on serverfault, it may have been due either to the firewall gateway blocking passive-mode FTP, or to something to do with implicit/explicit TLS.
Cyberduck can provide a UI for Windows or Mac - "Cyberduck is a libre server and cloud storage browser for Mac and Windows with support for FTP, SFTP, WebDAV, Amazon S3, OpenStack Swift, Backblaze B2, Microsoft Azure & OneDrive, Google Drive and Dropbox."
I like to set GitHub Actions workflows to run only when manually triggered, that is, using workflow_dispatch.
As that page says, "To trigger the workflow_dispatch event, your workflow must be in the default branch."
But we can change the default branch - as we may want to do for forks, etc. - via Repository --> Settings --> Default branch (choose by clicking the "choose another branch" button, which has two opposing arrows as its icon).
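A minimal workflow using only the manual trigger might look like the following - a sketch; the workflow name, input name, and echo step are placeholders:

```yaml
# .github/workflows/manual.yml
# Runs only when triggered from the Actions tab via workflow_dispatch.
name: manual-job
on:
  workflow_dispatch:
    inputs:
      reason:
        description: 'Why this run was triggered'
        required: false
        default: 'manual test'
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Triggered manually. Reason: ${{ github.event.inputs.reason }}"
```

Remember that, as noted above, this file must be on the default branch for the manual "Run workflow" button to appear.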
Wanted to create a pop-up message or reminder (an audio reminder was also requested later) for N to take periodic breaks from the computer. Since his usage was mostly passive, Workrave did not work for this purpose - Workrave's timer only progresses when the keyboard or mouse are active.
Looking for solutions, reminder apps on Mint/Ubuntu seemed to have only flatpak versions, which I did not want to install.
Finally thought it would be simpler to use cron.
But, to pop up messages on the screen with cron, we need some special handling. https://unix.stackexchange.com/questions/307772/pop-up-a-message-on-the-gui-from-cron https://unix.stackexchange.com/questions/111188/using-notify-send-with-cron
And for audio, we would need another line to be added to the script. Working script -
#!/bin/bash
export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
export DISPLAY=:0
export XDG_RUNTIME_DIR=/run/user/1000
# the XDG_RUNTIME_DIR line above is for sound to work
notify-send -u critical "this is the reminder to take a walk"
# the -u critical is supposed to show over full-screen apps, but doesn't always work
/usr/bin/mpg123 /home/username/Downloads/Walk.mp3 > /dev/null 2>&1
For help with setting up cron, cron.help is useful. Eg.
*/30 * * * * /home/username/walkremind.sh
which runs it every half hour.
For getting the environment variables' values, used
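The exact command isn't recorded above; one common approach (an assumption here - the session process name varies by desktop; on Mint's Cinnamon it is cinnamon-session) is to read the variables from a running desktop process via /proc, since /proc/<pid>/environ holds that process's environment, NUL-separated:

```shell
# Find a desktop session process owned by the current user;
# fall back to the current shell ($$) just for illustration.
pid=$(pgrep -u "$USER" -n cinnamon-session || echo $$)

# /proc/<pid>/environ is NUL-separated; tr makes it one variable per line.
tr '\0' '\n' < "/proc/$pid/environ" | sort | head -n 20
```

From that output, the values of DBUS_SESSION_BUS_ADDRESS, DISPLAY and XDG_RUNTIME_DIR can be copied into the cron script.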
I've installed 'Graphical Hardware monitor' to show RAM, disk read and disk write on the panel of the old MacBook Pro running Linux, which has only 4 GB of RAM including VRAM. Curious to see RAM usage, I tried opening Gmail + GDrive + a spreadsheet on the Firefox and Edge browsers separately. With Firefox, total RAM used was 1.92 GB. With Edge, it was 1.63 GB. So, Firefox was using around 300 MB more than Edge in this (not too rigorous) test.
There was an email from Google, "All Universal Analytics services cease to function starting July 1, 2024"
Googling "use analytics with blogger", I found this support article - and found that this blog had not been automatically transferred from the earlier Universal Analytics to Google Analytics 4. Copy-pasted the "Measurement ID" as mentioned in that support article, finding the "Measurement ID" via this article. Now let's wait for a couple of days and see if it works.
In my previous post about Stable Diffusion and how I could not generate 4k fulldome images with it, I had mentioned running out of RAM as the reason it failed. I thought perhaps if there is a way to run it on CPU instead of GPU, I could use a more recent faster machine - and perhaps use virtual memory and run for a longer time - to generate 4096 x 4096 fulldome images. But trying out rupeshs/fastsdcpu: Fast stable diffusion on CPU - found that even if I do an img2img using a 4K fulldome image as the input, the output was only 1024 x 1024.
There may be ways to run google colab or something like that, and generate 4K fulldome images - but that is something to be tested later. At first glance, it seems that even Dall-E is limited to 1024x1024, and people have to stitch generated images together to make bigger canvases, as mentioned in this reddit thread. And this thread gives a Stable Diffusion guide which uses upscaling - which has issues as mentioned in my previous post.
Now that I have access to a GPU, I tried out some AI-based image resizing, video upscaling and image generation, using stable diffusion as noted in a previous post. This post notes some of the pros and cons of the technology as it stands now, in April 2024.
1. There doesn't seem to be a way to incrementally correct images as with ChatGPT-generated code. We need to fine-tune our prompts if we need better results, and run the generation once again.
2. On this machine - a GTX 1050 GPU, quite slow - stable diffusion with the webui takes around a minute to generate a 512x512 default image with an img2img prompt and the default 20 steps.
3. Following this guide on fddb, my results with generation were not so great. Changing "little girl" to "businessman with briefcase" did not result in a briefcase in the 4-5 iterations I tried. Additionally, scaling up the image showed that the skyscrapers were not realistic at all. Perhaps this can be fixed by generating at the higher resolution instead of first generating at 512x512 and then scaling up - but I can't do that, since I run into 'CUDA out of memory' errors. Edit - further experiments in this post seem to indicate that 1024x1024 is the upper limit for most models.
Example - part of the fddb example image, upscaled to 4k using Lanczos resizing,
and using R-ESRGAN-4x+, we see the cartoonish quality,
4. Trying to upscale a video which had a series of stills - something like a slideshow - resulted in lots of image flicker in the upscaled video. The reason for the flicker is some horrendous hallucination, close-ups from a couple of frames shown below.
Original -
Upscaled -
and the next frame has the shading which causes the flicker,
Decided to use Lanczos instead of AI in that case. But single-image upscaling can give good results, especially if the image is a generic image and not an exact likeness of someone. In the case of an exact likeness, some hallucination of features is seen. Example close-up: something like spectacles appears on the bridge of the nose, not present in the original.
I'm not going into it right now, since I have too much on my plate already, and lots of pending content generation with translations, OpenSpace, Stellarium and available VR360 videos.
I also found the app under "App Services" in the relevant tenant - so there was no need to spin up a cloud shell.
Since we're currently not using this function (which automates starting the dev server every day, after it is shut down at night), I'll just delete this function instead of migrating it.
Looking for free (of cost) solutions for making a simple (but large) family tree, I had installed Gramps, but found it less than friendly to use, at least for my purposes. Looking at free family tree templates in Excel and Google Docs, https://spreadsheetpoint.com/family-tree-template-google-docs/ - did not make it very quick to create a tree. Then, more searches turned up familyecho.com - very quick and easy to use, browser based, exports to html.
Now that I have access to a desktop computer with an NVIDIA graphics card, even though it's just a GTX1050 with 8 GB VRAM, I could try out several useful tools which generally need GPU support to run well.
Running the SheepIt autoupdater when I'm not using the machine for anything else, racking up points so that when I need to render something in Blender, I can cloud-render it faster.
Upscaling some of our old videos - from VHS or miniDV - to 4K. Initial idea from Awesome Blender, then using the tutorial from nextdiffusion.ai, taking care to block images on that site (could be NSFW in many contexts).
Will also check out creating some animations with text prompts, and perhaps even fulldome images with Stable Diffusion, since the webui makes it quite a bit more accessible.
There was a query about how we can authenticate users on one of our LMS servers running Moodle, if they sign up and sign in using an external website. My reply was:
Moodle supports the following authentication methods as given in the documentation link below:
One of our Google Apps Script projects sometimes fails to push to Github, with the error message "(filename) does not match " - the quick fix is to delete the relevant file(s) in the git repo and run the script from the console - it then pushes successfully and retains the correct hash for the next push.
Edit: link to a repo with example GAS code for pushing a file to a github repo - a json file to a github pages site in this case - https://github.com/hn-88/GAS-daily-upload-to-sched
Mirage3d has a lot of show previews and demo reels on Vimeo, as well as full-length flat previews of shows like Dinosaurs at Dusk. These could be downloaded using savevideo.me with the play.vimeo.com url; download buttons are also available for some of their videos, as is the savefromnet extension.
I just forked this repo and deployed it to my account on Netlify, and it started working. I can simply modify the code to suit my requirements, with no need for me to install node etc. - though of course, if I need to change the functionality and use more node packages, I would need to do all that.
And it's very fast compared to Google Apps script.
Seeing that the OCVWarp post comparing it to ffmpeg was mentioned in February search performance results from google as the top growing page, I browsed ffmpeg.org to check how easy or difficult it would be to contribute a plugin to ffmpeg. Seems a bit complicated. Browsing further, found these instructions in the wiki for screen-recording video using ffmpeg. Quite useful information. And ffmpeg's wiki and bug-tracking runs on Trac.
I have added the co-administrators and the service administrators, as seen in the classic administrator pane, to the "Owner" role for each of the subscriptions.
(Work in progress as of now - the main concept works, but some refinements like setting the thumbnail, descriptive title, etc to be done.)
The first option was to implement it using bash scripts with something like rwxrob/cmd-yt - but the OAuth using Go did not work on the first try - go list -f '{{.Target}}' did not return anything. The steps I did are listed below.
Even manually trying to renew a domain's SSL certificate from LetsEncrypt, which had not been auto-renewed, did not work.
Certbot failed to authenticate some domains (authenticator: apache). The Certificate Authority reported these problems:
Domain: theconcerneddomain.ours
Type: unauthorized
Checking the DNS, found the relevant domain was pointing to a godaddy ip address instead of our ip address. Then, checked the whois record and found that the domain had expired. Alerted the concerned person, they renewed the domain, and a few hours later, certbot auto-renewed the SSL certificate, too.
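The checks described above can be reproduced with standard tools - illustrative commands, with the domain name a placeholder:

```shell
# Where is the domain actually pointing? (compare with our server's IP)
dig +short A theconcerneddomain.ours

# Has the domain registration expired?
whois theconcerneddomain.ours | grep -i 'expir'

# Once DNS is correct again, test the renewal without hitting rate limits
sudo certbot renew --dry-run
```

The `--dry-run` flag is useful here because Let's Encrypt rate-limits failed validations.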
Downloading a large file from noirlab.edu, only 4.5 GB had been downloaded before "server error" was reported. When I tried to resume the download, the server would not respond. (And the issue was not that the server was not reachable - if I started a fresh download in a different location, it would start; only the resume was not working.)
Unfortunately, with the noirlab link, we're unable to resume partial downloads with aria2c after 12 hours or so. The download also seems to stop after a few hours of downloading, with a server error message.
I've copy-pasted relevant parts of a log file produced by aria2, showing that the server times out when the client requests either
(a) more than one chunk, or
(b) a chunk near the end of the file
2024-02-15 22:09:56.738161 [DEBUG] [SocketCore.cc:993] Securely connected to noirlab.edu (54.200.108.162:443) with TLSv1.2
Reported this to the people at the website, and they made an ip-address-only link available for me to try. That worked, with the resume functionality working even after many hours.
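For reference, an aria2c invocation for this kind of resumable download might look like the following - a sketch, with the URL a placeholder; the exact options used are not recorded above. Since the log suggested the server timed out when more than one chunk was requested, limiting to a single connection is one thing to try:

```shell
# -c          : continue a partially downloaded file
# -x1 -s1     : a single connection and a single chunk, to avoid the
#               multi-chunk requests the server was timing out on
aria2c -c -x1 -s1 "https://noirlab.edu/path/to/largefile.mp4"
```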
The procedure in this Malayalam video is for changing the state to Kerala, but I suppose a similar procedure would work for changing to another state which is available on the Parivahan Sarathi website.
Notes: Can do renewal and address change in one step.
Parivahan website
Online services -> driving license related
Choose destination state
driving license -> Service on DL (Renewal/Duplicate...)
The DL number must be 15 characters when entered - add zeros after the year if it is not.
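The padding rule can be sketched as follows - a hypothetical example, assuming the common format of 2-letter state code + 2-digit RTO code + 4-digit year, followed by a serial number; the zeros go after the year so the total length becomes 15 characters:

```shell
dl="KL01 2008 1234"                      # hypothetical DL number
compact=$(echo "$dl" | tr -d ' -')       # strip spaces and hyphens
head=${compact:0:8}                      # state + RTO + year: KL012008
serial=${compact:8}                      # serial number: 1234
# left-pad the serial with zeros until the total length is 15
while [ ${#serial} -lt 7 ]; do serial="0$serial"; done
echo "$head$serial"                      # KL0120080001234 (15 characters)
```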
Nowadays we have grown used to Google Maps having high-resolution satellite images. But they are copyrighted images, and the Google credits need to be shown onscreen if we use them in a planetarium show or something like that. Looking at OpenSpace and its data sources, I googled for some free sources of hi-res satellite images -
I tried to incorporate frame-serving via avisynth+ into my OCVWarp workflow described here, so that steps 1 and 2 could be combined and the rendering time halved - most of the rendering time is taken up by the CPU usage of the codec, I believe. But it did not work.
The latest avisynth+ files work with the latest Virtualdub - even 4096x4096 videos are rendered with no problems. But creating a frameserver with Virtualdub and pointing OCVWarp to the frameserver causes OCVWarp to crash/not respond. AVFS also did not help.
One of our domains runs their emails on Microsoft365/Outlook, and they were unable to send emails to some other domains they own. Some messages would bounce saying "no such user". It turned out that the recipient domain had been added to Microsoft365 as an additional domain, but no users were added under that domain. Also, that domain was actively being used, with MX records pointing to Google Workspace. So, Outlook or Microsoft365 was not looking at the MX records, but instead looking at the internal users with active directory, and deciding that there was "no such user".
Any other users who had been added to the Microsoft active directory would need to check their inbox on Outlook and not on Google Workspace for any emails sent from our domain on Outlook. This would be relevant for emails being sent from Onedrive, Microsoft Teams, etc.
One of our domain administrators got an email from Microsoft, "Ensure your email authentication records are set up to avoid mail flow issues to third-party email accounts." Apparently, DKIM was not set up properly.
|Microsoft.Exchange.Management.Tasks.ValidationException|CNAME record does not exist for this config. Please publish the following two CNAME records first.
Domain Name: ourdomain.org
Host Name: selector1._domainkey
Points to address or value: selector1-ourdomain-org._domainkey.ourdomain.onmicrosoft.com
(This is tested and working fine with dig.)
Host Name: selector2._domainkey
Points to address or value: selector2-ourdomain-org._domainkey.ourdomain.onmicrosoft.com
(This record returns SOA. I thought there was something wrong with the onmicrosoft.com DNS settings. But probably this record is used only when the key needs to be rotated.)
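The two selector records can be checked with dig - the domain here is a placeholder, matching the anonymized records above:

```shell
# selector1 - should return the onmicrosoft.com CNAME target
dig +short CNAME selector1._domainkey.ourdomain.org

# selector2 - in our case this returned SOA (no answer) rather than the CNAME
dig +short CNAME selector2._domainkey.ourdomain.org
```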
After 90 minutes or so, I checked and found DKIM successfully enabled on that page. The page said that it would take several minutes to roll out the changes, so DKIM signing would commence after a few minutes/hours.
we were trying to upload special fonts for headings and subheadings in the home page and we could not do it.
Can you pls check if it is related to any file permission issue.
Are you trying to install those fonts locally or use them by linking to Google's servers?
If you are trying to install them locally, you probably may not have the required permissions.
I've now downloaded those fonts and installed them on the wp server locally. Maybe wpastra can pick them up from there. Please check.
(When they said it still did not work, I responded as follows:)
1. Google fonts can be linked in web pages in such a way that the users of the web pages get the fonts from Google and display them in their browsers. If this has to work properly, there are several things which have to be taken care of.
Example:
3. Another way in which fonts (any font) can be used is to just define the font, without embedding it - in this case, the font will fail to load if the same font is not present on the user's machine.
Without knowing what you are trying, whether the situation is 1, 2 or 3, I cannot help you further.
If you cannot tell me whether it is 1, 2 or 3, I would need access to a test page where the issue is seen.
(And they confirmed that it is working:)
The font issue is resolved, and we can choose the font type from the font family. We found out that we were missing one step for changing the font.
As mentioned in September last year, Stellarium now has some features which enable creation of flybys. Perhaps Saturn is the most spectacular? Tried out creating two fulldome versions suitable for planetariums, one with Saturn visible at the apex of the dome, and another with Saturn near the front side of the dome, suitable for planetariums with unidirectional seating.
One of our servers which used to send a few notification emails to us via postfix stopped doing so - status=bounced (host gmail-smtp-in.l.google.com[142.251.2.27] said: 550-5.7.26 This mail has been blocked because the sender is unauthenticated. 550-5.7.26 Gmail requires all senders to authenticate with either SPF or DKIM.
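A sketch of the kind of DNS record that satisfies the SPF part of Gmail's requirement - the domain and IP address here are placeholders, not our actual values:

```
ourdomain.org.    IN    TXT    "v=spf1 ip4:203.0.113.25 ~all"
```

The ip4 mechanism should list the public IP of the server running postfix; ~all soft-fails everything else. DKIM signing (e.g. via opendkim) is the other route Gmail accepts.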
By default, Kodi on LibreELEC running on a Raspberry Pi 4 was detecting our Optoma ZK 507 projector and setting the output resolution to 4096x2160 @ 30 Hz. This was all right, but our 3840x2160 and 1920x1080 shows were showing a black bar at the bottom. Checking for solutions - the fix was not the 'minimise black bars' global setting, but instead setting the resolution, as found in the comments on this thread,
I did not need to make a whitelist or blacklist in Kodi (the blacklist is for devices, the whitelist is for allowed video modes) - just setting the resolution at Settings --> System --> Display to 3840x2160 @ 30 Hz did the trick. Now there are no black bars at the bottom of the FHD or UHD videos.
Another issue was hiss in the sound output via a generic USB sound device when amplified by our sound system. We set up a limiter and a gate with our old Behringer MDX 2200 compressor. Then, for further sweetening, I got a second-hand Lexicon Alpha sound device via bajaao.com for approximately one day's salary, and used that for sound output - no more hiss.
For the Lexicon Alpha to be recognized, we had to plug it in after the Raspberry Pi booted into LibreELEC and Kodi - on every reboot. After the device was recognized, a one-time configuration change at Kodi --> Settings --> Audio settings --> Audio device set the output device to Lexicon Analog (not Lexicon SPDIF, in this case).
Trying a direct connection of the Lexicon Alpha to the RPi (as against a connection via a USB hub) also did not resolve the issue of it not being recognized on boot. So, we're plugging it in after every reboot (on the USB hub, for convenience).
That link also discusses some pros and cons - like no web lookups for HuggingChat. So, I looked up how we can make our own chatbot: the full detailed step-by-step process, how much manual work is involved, and what we would need to customize.
(this seems to be the generic steps without customization)
So, link (2.) seems to indicate that we need to train it using a csv file with sample queries and responses. This is something which, for example, a large number of volunteers could generate. If each one generates questions and answers based on one Discourse, for example, 200 or 2000 people could generate quite a good data set.
We could use AAAA records (IPv6) instead of A records (IPv4) to avoid these charges, since Cloudflare would proxy to IPv4 and IPv6. In case we need to SSH into the servers, and our ISP / router does not support IPv6, then we might need something like this,
1. Go to your VPC settings and assign a /56 prefix.
2. Go to subnet settings within the VPC and assign a /64 to the subnet.
3. Go to the instance's network interface settings and enable IPv6 address assignment.
Logged in successfully with the browser-rendered terminal on the Edge browser on Windows 11.
Possible pain point - note that for each login (with a timeout of 24 hours as the default), we have to copy-paste the private key, in PEM format, into the authentication form. PEM format for the private key is a must; OpenSSH format doesn't work - Browser SSH Private Keys not working - Zero Trust / Access - Cloudflare Community.
On one of our servers running Ubuntu 22.04, Wordpress reported that the upload size was set to 2 MB. Changed it from 2M to 50M in the global php settings at /etc/php*/8.1/apache2/php.ini - that fixed the issue.
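The relevant php.ini settings - post_max_size should be at least as large as upload_max_filesize, since uploads travel inside the POST body:

```ini
; /etc/php/8.1/apache2/php.ini
upload_max_filesize = 50M
post_max_size = 50M
```

Apache needs a restart afterwards (`sudo systemctl restart apache2`) for the change to take effect.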
We have used two different methods to deal with this - one solution using XOAuth, and one using App Passwords.
1. For some of our Moodle instances which used Google Workspace emails for outbound emails, we could use the in-built XOAuth support as explained here,
So I created two separate sets of credentials for one of the email ids, since the redirect url for each (one production and one development) server would be different as mentioned in the documentation linked from the tracker link above -
server.url/admin/oauth2callback.php
2. For our internal server using ssmtp as mentioned in this earlier post, this procedure (or the use of PHPMailer which supports XOAuth) would not be suitable, since it does not have a public-facing website url.
describes a python solution, but needs refresh, and needs a browser.
)
In passing, the email from Google mentions that App Passwords are not going away - "Password-based access (with the exception of App Passwords) will no longer be supported". We can create App Passwords if we enable 2FA. So, we enabled 2FA, created an App Password, and just replaced the password in our earlier ssmtp configuration file with the App Password (without spaces). And that works.
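For reference, the shape of the ssmtp configuration - a sketch with placeholder values; only the AuthPass line changes, to the 16-character App Password with the spaces removed:

```
# /etc/ssmtp/ssmtp.conf (placeholder values)
root=username@ourdomain.org
mailhub=smtp.gmail.com:587
AuthUser=username@ourdomain.org
# AuthPass is the App Password, entered without spaces
AuthPass=abcdabcdabcdabcd
UseSTARTTLS=YES
```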
This moodle plugin "Dictionary" requires a one-time free registration to the Merriam-Webster api at https://dictionaryapi.com/ - they currently provide 1000 free lookups per day per key, and 2 free keys per registration. The plugin uses one key for dictionary, one for thesaurus.
Instead of commenting it out, I used the method mentioned lower down in the thread for Moodle 4.1 and above: going to Site administration --> Server --> Support contact and changing the option to make it available only for logged-in users. Did the same for two of our production instances, too.
Paid tech support by email/telephone - Nettigritty.com
We have found that they usually respond within 3-4 hours.
But a caveat - they will only extend support for those services for which you have paid them. For example, if you have paid for web hosting, they will support you only on web-hosting-related issues. They may not be able to help you with open-ended questions or how-do-I-do-this-in-Moodle type questions, though, of course, you could ask them if they will do so.
As mentioned in my previous post, SheepIt render farm is a free render farm for Blender Cycles and Eevee engines, based on mutual co-operation - we connect our machines to the render farm to earn points, and can get our projects rendered in double-quick time in return.
Their FAQ page lists answers to most questions one may have about the service. I've uploaded an example of a project which includes the baked cache for particle systems, on Github, here. As mentioned in their FAQ, the steps to do this are:
1. Make all paths relative - in Blender, choose
File --> External Data --> Make All Paths Relative
2. If you need to render a baked fluid/particle system, zip the blender file and the cache directory, ensuring that the cache directory is also set as a relative path. In my case, I had set the cache directory to be inside the project directory to simplify matters. In the project above, this is located in the Physics tab of the Smoke Domain object as seen in the screenshot below. As you can see, the current directory is indicated by a leading double-slash - //.
3. One point to note is that choosing File --> Save As, saving the project to another directory, and then copying the cache directory there does not work (as of Blender 3.2 on Windows) - probably because the file paths get mangled. Anyway, if you want to break up the project into multiple renders due to the bake files being too large, copy-paste the project outside Blender instead; don't use File --> Save As.
4. There is a script available on the Forums for splitting the project, in case you want to automate the process. Or, like I did, you can just choose to bake all, and then copy only the .vdb files in the data subfolder of the cache for a limited number of frames. For the zoomed out view, for frames after 275, this was something like 30-50 frames per project. And choose to render only those frames. Then create a fresh zip file for the next set of frames. And so on.
Here are some project statistics, for the zoomed in project render, which was taking around 3 minutes per frame on my machine:
Summary
Storage used: 971.8 MB
Cumulated time of render: 9h01m
Points spent: 5,767 points
Real duration of render: 1h23m
On reference per frame rendertime: 0m51s
On my machine, the 600 frames would have taken 30 hours to render, and even on the Studio Mac it would have taken 5-6 hours, but SheepIt finished it in less than 90 minutes due to the power of distributed processing.
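The speedup arithmetic from the statistics above, as a quick sanity check (600 frames at 3 minutes each locally, versus the 1h23m = 83 minutes the farm took):

```shell
# local render time in hours, and the farm speedup factor
awk 'BEGIN {
    local_h = 600 * 3 / 60;        # 600 frames x 3 min/frame = 30 hours
    farm_h  = 83 / 60;             # "Real duration of render: 1h23m"
    printf "%.0f hours locally, %.1fx speedup\n", local_h, local_h / farm_h
}'
# -> 30 hours locally, 21.7x speedup
```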
So, nowadays I leave the SheepIt client running whenever I switch on my machine, to accumulate points. A borrowed CPU from VV, fitted with the GTX 1050 Ti card which I purchased for my research, also helps at the office. The 1050 Ti is only 4% of the speed of the standard GPU, but since most of the projects specify GPU, it usually gets some project to render, as against the i5-1235U laptop CPU, which often goes many minutes with "No jobs to render".
And finally, here's a youtube video of the rendered output. This is a "Fulldome" video, meant for projection in a planetarium. (Since this is my first effort, the launch needs more work to look more realistic - currently the motion of the rocket looks a bit cartoonish.)