Sunday, April 26, 2020

GCP costs

Following up on my GCP trials post - Google Cloud Platform bills per second, in dollars, so it takes a bit of arithmetic to know how much it's going to cost. For the $300 free trial (approx. Rs. 22,500), the question was: will it last one year with my type of usage - using the GCP VM for compiling and other such tasks, and shutting it down after use?

I downloaded the transactions listed on GCP as a CSV, and found that the 150 GB standard persistent disk (billed as Storage Image and PD Capacity) was costing me around Rs. 25 per day. Other resource usage (like cores, network ingress/egress) would depend on how much I'm working on the VM. My maximum usage in a day was around Rs. 500, with 4 cores and 15 GB RAM left on overnight.

Rs. 2.36 per hour per core x 4 cores is nearly Rs. 10 per hour.
Rs. 0.30 per hour per GB x 15 GB RAM is nearly Rs. 5 per hour.
So, running my VM costs around Rs. 15 per hour.
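The same arithmetic as a one-liner, with the per-unit rates taken from the lines above:

```shell
# hourly VM cost from the per-unit rates above (Rs per hour)
awk 'BEGIN {
    cores = 2.36 * 4    # 4 cores
    ram   = 0.30 * 15   # 15 GB RAM
    printf "cores %.2f + ram %.2f = Rs %.2f per hour\n", cores, ram, cores + ram
}'
```

which comes to Rs 13.94, rounding up to the Rs. 15 figure.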

Network egress is around Rs. 5 per GB. But this is the more expensive APAC egress - egress to the Americas over the peering network is cheaper. Downloading Microsoft's 20 GB VM image did not even show up in the transactions. Maybe it is in the same data centre - or simply because incoming traffic (ingress) is generally free on GCP and only egress is billed. My total network related expenses so far this month have been only around Rs. 12 in spite of downloading at least 25 GB of VMs and ISOs.

At minimal usage of around Rs. 40 per day, the free credit will last more than a year. So, I can use the VM quite liberally.
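A quick check of how far the Rs. 22,500 of credit stretches at the minimal and heavy daily rates above:

```shell
# days of credit at minimal usage (Rs 40/day) vs heavy usage (Rs 500/day)
awk 'BEGIN {
    printf "minimal: %d days, heavy: %d days\n", int(22500 / 40), int(22500 / 500)
}'
```

Minimal usage gives 562 days, i.e. comfortably more than a year; even at the Rs. 500 worst-case day, 45 days.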

Overall, expensive compared to a dedicated server, but attractive when it is a free trial. Thanks, Google.


Saturday, April 25, 2020

some git commands and github

For the error
Cannot pull with rebase: You have unstaged changes.
(seen when trying to do git pull before git push), see
https://stackoverflow.com/questions/23517464/error-cannot-pull-with-rebase-you-have-unstaged-changes

If the pull complains that you have changes,
git stash
git pull
git stash pop
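On git 2.9 or newer, the stash / pull / pop dance can be collapsed into one step with --autostash. A runnable sketch with throwaway local repos (all names and file contents here are made up):

```shell
#!/bin/sh
# demo: `git pull --rebase --autostash` stashes, rebases, and restores
# local unstaged edits in one step (git 2.9+)
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q upstream
git -C upstream config user.email you@example.com
git -C upstream config user.name you
( cd upstream && echo a > a.txt && git add a.txt && git commit -qm init )
git clone -q upstream work
git -C work config user.email you@example.com
git -C work config user.name you
# upstream moves ahead with a new file...
( cd upstream && echo b > b.txt && git add b.txt && git commit -qm second )
# ...while the clone has an unstaged edit to a tracked file
echo local-edit >> work/a.txt
git -C work pull --rebase --autostash -q   # plain --rebase would refuse
grep local-edit work/a.txt && test -f work/b.txt
```

A plain git pull --rebase at that point fails with exactly the "You have unstaged changes" error above; --autostash makes it go through.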

git add relative/path/to/filename/which/has/changed
OR
git add directory/name/which/has/changed
git commit
git push

OR
git add -u # stages changes to all tracked files (not new, untracked ones)
git commit
git push


Atlassian's git cheat sheet is also useful, especially the first page.
git init
OR
git clone
to initialize local repo.

To copy a folder from one repo to another on github, a somewhat elaborate process, if you want to preserve histories.
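If the full filter-branch surgery is overkill, one simpler pure-git approach (a sketch, not the linked post's method - the repo and folder names here are made up) exports the folder's commits as patches and replays them in the destination repo:

```shell
#!/bin/sh
# move one folder's history between repos via format-patch / am
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q src
git -C src config user.email you@example.com
git -C src config user.name you
mkdir -p src/tools
echo 'echo hi' > src/tools/run.sh
( cd src && git add tools && git commit -qm 'add tools' )
# export every commit that touched tools/ as a mailbox patch
git -C src format-patch -o "$tmp/patches" --root -- tools >/dev/null
git init -q dst
git -C dst config user.email you@example.com
git -C dst config user.name you
( cd dst && git commit -qm init --allow-empty )
git -C dst am -q "$tmp"/patches/*.patch   # replays author, date, message
test -f dst/tools/run.sh
```

This keeps the per-commit history of the folder, though commit hashes change; the elaborate filter-branch process in the linked post is needed if you want more than that.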

And if you just want to download a single directory from a repo on github, this post has a nice solution using github's svn features - navigate to the folder you want on github, copy the url, change tree/master to trunk in the url, and pass it to svn.
For example, to download the directory https://github.com/hn-88/OCVWarp/tree/master/octave
the command would be
svn checkout https://github.com/hn-88/OCVWarp/trunk/octave
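The tree-to-trunk rewrite itself is mechanical, so it can be scripted:

```shell
# rewrite a github folder URL into the svn checkout form
url="https://github.com/hn-88/OCVWarp/tree/master/octave"
svnurl=$(printf '%s' "$url" | sed 's|/tree/master|/trunk|')
printf '%s\n' "$svnurl"   # https://github.com/hn-88/OCVWarp/trunk/octave
# then: svn checkout "$svnurl"
```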

Edit: updated with pushing git tags.
For deleting a remote tag, if the name of the tag you wish to delete was v1
git push --delete origin v1

More basics of git and tags from here,
git clone
cd to the repo
To view existing tags,
git tag
To create a new tag called v2.11
git tag v2.11
To push the tag to remote,
git push origin --tags
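The whole round trip - create, push, list, delete - can be rehearsed against a local bare repo standing in for github (a sketch; only the v2.11 tag name is from above, the rest is made up):

```shell
#!/bin/sh
# tag round-trip against a throwaway "remote"
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git repo 2>/dev/null   # clone of empty repo warns; ignore
cd repo
git config user.email you@example.com
git config user.name you
echo hello > readme && git add readme && git commit -qm init
git push -q origin HEAD
git tag v2.11                 # create the tag locally
git push -q origin --tags     # push all tags
git ls-remote --tags origin   # lists refs/tags/v2.11
git push -q --delete origin v2.11   # remove it from the remote again
```

Note that deleting the remote tag leaves the local tag in place; git tag -d v2.11 would remove that too.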

But AppImages created with Travis-CI, even with this git push --tags, still show up with the commit hash instead of the tag in the filename. Will need to look into it.

Monday, April 20, 2020

various tries to compile old version of VLC

Tried various methods to compile vlc-warp-2.1, which is based on VLC 2.1, on newer distros, or create an AppImage for VLC. Didn't succeed. Documenting various things I tried.

1. Doing a diff between VLC and vlc-warp - doing a diff between two directories. A very large number of files would need to be modified, so I didn't take it further. This would probably be the way to go, though - make similar changes to a newer VLC release.

2. Trying to make a vlc-warp AppImage - the AppImage complains that the plugins folder is not found. Tried this link for plugins path.

3. Tried installing VLC to a folder - https://wiki.videolan.org/VLC_configure_help/

4. Making a copy of the plugins folder in ~/.local/share didn't work.

5. Tried a Windows cross-compile on a recent Linux Mint, using the wayback machine and also the recent Win32 compile instructions, along with this forum post for live555:
apt-get install liblivemedia-dev
apt-get install libdc1394-22-dev
apt-get install libdvdread-dev

--disable-lua because lua5 was already installed.

apt-get install libshout-dev
apt-get install libmad0-dev

But - configure: error: libavcodec versions 56 and later are not supported yet.

So, have to go back to Ubuntu 12.04 to compile VLC 2.1 (and vlc-warp 2.1). And I couldn't get a VLC AppImage to work either.

Friday, April 17, 2020

finding linked libraries

I had left out opencv_ffmpeg349.dll when I initially released OCVWarp for Windows. At that time, I had copied over all the dlls which the exe complained about not finding, till it started running. But apparently a missing ffmpeg dll causes it to fail silently.

On Windows, listdlls PID got me the list of dlls I needed to add. But that required running the executable, and running listdlls while it was running.

On Linux, there is ldd nameofexecutable.bin which can be used without running the executable.

On Windows, this post seems to say that with a hex editor, we should be able to find all the dlls. But I did not find the ffmpeg dll with a hex editor in OCVWarp. Maybe listdlls exists because of this? But then, ldd can also be run on Windows (for example under MSYS2 or Cygwin) - so that may be the best option next time.

Airtable versus Google Sheets

I wondered how Airtable differs from Google Sheets, and googling threw up this informative post. Perhaps a bit subjective, but still.


Thursday, April 16, 2020

building OCVWarp with MinGW

In my earlier post, I mentioned how I'd compiled OpenCV 3.4.9 using MinGW (32-bit) on Windows 10. In this post, I give the details of the changes I needed to make in order to build a working OCVWarp using this opencv.

Strangely enough, the reason for OCVWarp to silently exit for earlier builds, which were deemed successful on appveyor, was the presence of this line,
const bool askOutputType = argv[3][0] =='Y';

Removing the argv from this line and making it just
const bool askOutputType = 0;
made the build go to the next problem point, which is that OpenCV on Windows needs paths with the \ escaped - so C:\\path\\to\\video.mp4 and so on. Fixed that with this hack of creating a function with switch / case. After that, the build works on Windows 10, but doesn't find the video file on Windows 7.
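The escaping in question just doubles each backslash; a shell sketch of what the switch / case hack does (the path here is made up):

```shell
# double every backslash in a Windows path
winpath='C:\path\to\video.mp4'
escaped=$(printf '%s' "$winpath" | sed 's/\\/\\\\/g')
printf '%s\n' "$escaped"   # C:\\path\\to\\video.mp4
```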

Also, to build with my compiled opencv, I needed to specify the path to the link libraries, because otherwise it would link to the Visual Studio C libraries which were also present, and cause linking errors. The easiest way I found to do this was to run the cmake gui and choose the Add Entry button to add a target_link_libraries path pointing to mingw's bin directory.


Then configuring with the MinGW Makefiles generator, followed by
cd build
make OCVWarp.bin
on the command-line, does the build properly. Interestingly, here the target name is case-sensitive :)

The pre-built dlls for OpenCV 3.4.3 which I had used for Appveyor builds have extremely limited support for output codecs. Probably they are built without ffmpeg support. So, if someone wants to build OCVWarp for x64 on Windows, it would be better to build OpenCV first and then build OCVWarp, rather than use the pre-built OpenCV.

Wednesday, April 15, 2020

building opencv 3.4.9 Windows 32 bit version with MinGW

I read a lot of confusing and contradictory posts on installing MinGW, MSYS2, CMake etc on Windows and using them. Did not want to create any elaborate manually installed toolchain like
https://here-be-braces.com/a-windows-cpp-ecosystem-without-visual-studio/

Thought I could just run the powershell script from Mudlet, which gives a way of building it on Windows 7 - https://wiki.mudlet.org/w/Compiling_Mudlet#Compiling_on_Windows_7.2B

That worked quite well (on a VM running on GCP) up to installing QT. Before that, I had to enable running powershell scripts with
Set-ExecutionPolicy Unrestricted
or else the machine would not run unsigned powershell scripts. Also, I disabled Windows Defender virus protection from Windows settings, since the virus scanner was running for every make, configure etc. and slowing things down.

After installing QT, the installation script seemed to stop. Hitting Ctrl-C made it start again, and it installed a few more tools till luarocks. There, it gave some error and exited. But that was sufficient for me to try and build opencv.

Downloaded OpenCV 3.4.9 zip file, extracted it to C:\opencv3.4.9,
cd c:\opencv3.4.9
md build
cd build
cmake ..

But of course, that did not work so easily.

I had to set the CC environment variable, which I did temporarily with
$env:CC = "C:\Qt\Tools\mingw730_32\bin\gcc.exe"
since otherwise I would need to open another shell, and all the env variables created by the mudlet install script would be lost. Then there were more errors about not specifying the make program and the bitness, so finally the working cmake command was

cmake -D"CMAKE_MAKE_PROGRAM:PATH=C:\Qt\Tools\mingw730_32\bin\make.exe" -G "MinGW Makefiles" ..

That itself took a good five to ten minutes. Then make took another two or three hours! Looks like ffmpeg gets downloaded and installed during the build process.

At around 48%, the build exited with the sprintf error mentioned here. So I added the line
#define NO_DSHOW_STRSAFE
just above
#include "DShow.h"
in the file sources\modules\videoio\src\cap_dshow.cpp
went back and ran make again, and it started from where it left off (make notices that the earlier files had already been compiled). I did not do multi-threaded compiling, since that also might give errors, as mentioned here.
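The same edit can be scripted with sed; a sketch against a stand-in file (point f at the actual sources\modules\videoio\src\cap_dshow.cpp in your tree):

```shell
# insert the NO_DSHOW_STRSAFE define just above the DShow.h include
f=cap_dshow_demo.cpp   # stand-in for the real cap_dshow.cpp
printf '%s\n' '// ...' '#include "DShow.h"' '// ...' > "$f"
sed -i 's|#include "DShow.h"|#define NO_DSHOW_STRSAFE\n&|' "$f"
grep -n -B1 'DShow.h' "$f"   # define now sits on the line above the include
```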

Now the example files and the opencv test files created in the bin folder run, but OCVWarp still does not work - it silently exits on Windows 10, and OCVWarpcmd gives a "Could not open video file input.mp4" error message on Windows 7. Is it due to some missing codec? (Edit - it was a strange issue - my separate post on that is here.)

For building OCVWarp, I had to add some environment variables, since this was a new session after a restart: paths to mingw's bin folder and cmake's bin folder added to the PATH variable, the CC and CXX environment variables set to gcc.exe's path, and the OpenCV_DIR variable set to the install folder, which in my case was C:\opencv3.4.9\build\install.

For running on other machines, along with the opencv dlls, the runtime dlls from mingw's bin folder also need to be copied. libstdc++-6.dll, libwinpthread-1.dll, and libgcc_s_dw2-1.dll in this case.

Edit - 16th April - updated with link to building OCVWarp, and corrected the generator name - not NMake Makefiles, but MinGW Makefiles.

Tuesday, April 14, 2020

Windows mingw compile notes

Link dump of issues faced and their resolutions for compiling with mingw. Edit: The final product is posted at https://hnsws.blogspot.com/2020/04/building-opencv-349-windows-32-bit.html and the related github repo is https://github.com/hn-88/opencvpatch

1. mingw compile and log on to appveyor build machine - some tips are at https://github.com/symengine/symengine/blob/master/appveyor.yml

2. avoiding windows paths backslash problems in python - using raw strings - https://lerner.co.il/2018/07/24/avoiding-windows-backslash-problems-with-pythons-raw-strings/ 

3. For specifying make https://stackoverflow.com/questions/6141608/cmake-make-program-not-found

Setting env var with powershell https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-powershell-1.0/ff730964(v=technet.10)?redirectedfrom=MSDN

need to open a new shell if setting on machine as a whole and not for session.

manually may need to set as per https://stackoverflow.com/questions/41918477/how-to-build-opencv-3-2-0-with-mingw-on-windows

failed to set bitness, solved with -G as below.

4. https://answers.opencv.org/question/122696/is-it-possible-to-build-opencv-320-libraries-using-mingw-for-a-32bit-windows/

the cmake command tried first was

cmake -D"CMAKE_MAKE_PROGRAM:PATH=C:\Qt\Tools\mingw730_32\bin\make.exe" -G "Nmake Makefiles" ..

from

https://stackoverflow.com/questions/48624415/how-to-fix-cmake-error-in-cmakelists-txt-generator-nmake-makefiles-does-not-sup 

(by creating a build directory inside the opencv directory, then cd build.)

Done in powershell. 

Failed with CMake Error: Could not create named generator Nmake Makefiles - the generator name is case-sensitive ("NMake Makefiles"), and that generator needs Visual Studio's nmake anyway.

Went forward with MinGW Makefiles.

5. For the error with sprintf, http://www.programmersought.com/article/3109513497/ says: modify E:\OpenCV_3.3.1\opencv\sources\modules\videoio\src\cap_dshow.cpp - above the line

#include "DShow.h"

add the line #define NO_DSHOW_STRSAFE, like:

#define NO_DSHOW_STRSAFE
#include "DShow.h"

Then, 

make

and it started from where it left off.


Monday, April 13, 2020

running Visual Studio 2019 VM on Google Cloud Platform

Microsoft offers free 90-day Windows 10 trial virtual machines for Edge testing, and a larger Enterprise virtual machine set with Visual Studio on a 60-day trial. To run these virtual machines, a computer with 100 GB or more of free space, 2 or 4 CPU cores, 8 or 16 GB RAM etc. would be good. In fact, the Visual Studio VMs are 20 GB downloads which take up more than 40 GB in use. So, those of us with low-end machines would be frustrated if we tried to use them - on external drives and so on - with slooow performance due to lack of RAM and slow disks. This is where cloud server free trials help. Earlier I had written about Microsoft's Azure free trial; this post is about Google Cloud Platform (GCP).

The one-year free trial needs a fresh google account and a credit/debit card (in India), and gives you $300 of credit valid for one year. 4-5 hours of work yesterday on a dual-core 8 GB VM cost me around $1.

Windows Server images are not available under the free trial credits. If you want to run Windows, you have to run it as a nested VM - a VM inside a VM. Nested VM support on Google Cloud Platform has some restrictions - otherwise virtualbox complains that VT-X is not available. But the setup I created worked: the Microsoft Virtualbox VM download, running on an Ubuntu 18.04 instance obtained from GCP. The procedure I followed to get nested VMs working was basically this. Instead of downloading and installing the gcloud sdk, we can directly run the cloud shell from a browser, especially Chrome. With the following caveats for Windows nested VMs.

  1. sudo worked only in the SSH-in-browser window (without asking for any password) which could be opened from GCP. In an ssh session opened in a terminal window on my local machine, it asked for the user's password (which I did not know. Is it the same as the google account password?). That ssh session was opened with manually copied ssh keys. So, just use the SSH from browser window.
     
  2. install the latest virtualbox - the downloaded image needs Virtualbox 6 and will not work with the default Virtualbox 5 installation from Ubuntu 18.04. If you are uninstalling Virtualbox 5 and installing Virtualbox 6, you also need to sudo apt-get remove virtualbox-dkms
  3. increase size of disk to 150 GB and change the machine type to have more RAM. This can be done with the Edit button in the VM instances page of GCP. If disk is too small, virtualbox complains of "invalid argument" and crashes. Plan for free disk space 3x the size of the downloaded image at the very minimum.
  4. I followed these instructions for setting up VNC, with the default local client Remmina. My preferred desktop was Cinnamon, but it kept crashing. Maybe because I did not edit the startup script. Then tried xfce. But the panel doesn't work, in spite of these instructions. But I can right-click to get a menu and run applications on the desktop.
Edit - I've now added a post on GCP running costs.

Sunday, April 12, 2020

announcing offline tools for warping

My post in the small_planetarium Yahoo group
(Edit - Yahoo groups closed down in Dec 2020)

Paul Bourke had recently announced an offline pre-warping tool for Mac which can do warping for videos similar to his old tgawarp tool.

For non-Mac users, here is a tool which does a similar job using OpenGL, an update from my old GL_warp2Avi, now updated to use frame buffers and to run on Linux (and Windows, though I have not built on Windows yet.)
Here is the link to the readme, which has some usage info,
and the Linux binary is available at

I believe similar performance can be obtained with only CPU, without OpenGL, because for writing to disk, the frame-buffer again has to be copied to CPU memory and the video encoding is the slowest step. So I have incorporated this warping into OCVWarp also. Linux binary available at
with usage info in the readme at

In case anyone finds any image quality issues with the OCVWarp version, please let me know, since I have not tested extensively for anti-aliasing quality.

COVID-19 Global summary graphs

Some nice data visualizations at

Friday, April 10, 2020

deploying build artifacts from travis to github

Till now I had used the fairly easy-to-understand deployment strategy of uploading to transfer.sh, downloading from there, and then manually uploading to create github releases.
after_success:
  - curl --upload-file GL_warp2mp4*.AppImage https://transfer.sh/GL_warp2mp41.10-x86_64.AppImage


Appveyor has its artifacts page, so a similar strategy works there.

But a couple of days back, transfer.sh was (temporarily) down, so deploying directly to github from travis seemed attractive.

But did not get it to work, until now. Some notes -
  1. Directly entering github personal tokens in a commit on github results in github immediately revoking the token.
  2. Travis has rolled out v2 of their deployment code, but maybe still needs the edge: true directive to reliably use v2 instead of v1.
  3. Currently working code is here. The environment variable mytokenname has to be set in the project's settings on travis.
  4. The draft setting ensures that the deployment is visible only to me - I can edit the draft and create a release. Also, because of this, when I create a release, that commit does not create a new deployment in an infinite loop.
  5. The file_glob setting allows the use of wildcards in the file option.

deploy:
  provider: releases
  token: $mytokenname
  file_glob: true
  file: h*.bin
  skip_cleanup: true
  draft: true
  edge: true
after_success:

Thursday, April 09, 2020

abandoning Windows 7 builds

Yesterday, I thought I would try Windows builds for OCVWarp, GL_warp2mp4, etc using MinGW and the procedure outlined in https://wiki.mudlet.org/w/Compiling_Mudlet#Compiling_on_Windows_7.2B

But it turns out, quite a few GB are required on C: drive, especially QT, and everything installs there by default. The machine I'm currently working on has only 4 GB free in C: drive. Tried doing a manual install to H:, too painful. So, it would be easier for those running old Windows versions to boot to a live cd or live USB of Ubuntu or Mint or something, and use the AppImage instead!

I might do appveyor builds and test on some Windows 10 VM, if that works without too much trouble. Possibly a good example for appveyor yml file is here. And more about the MinGW build ecosystem here.

Edit: The OCVWarp build on Windows with MinGW is detailed at
https://hnsws.blogspot.com/2020/04/building-ocvwarp-with-mingw.html

More notes on OpenGL programming

Following up on my earlier post about OpenGL programming. GL_warp2mp4 is now functional.

Checking the speeds achieved by pboUnpack on my machine, 2.8 fps for a 2048x2048 texture if GL_BGR is used, but increases to 29 fps if GL_RGBA8 is used. So, the 3 fps I get with GL_warp2mp4 with hi-res files seems to be limited by the writing to fbo.  If I write directly to screen and not to a texture on an fbo, I get faster frame rates. But that, of course, has the limitation that the destination has to be of lower resolution than the screen size.

The pixel format GL_BGRA is unfortunately not supported internally for FBO (at least on my system). So, GL_RGBA8 has to be used, which is apparently the fastest we can get.

pboPack shows between 6 and 9 Mpixels/s on my machine, both with PBO on and off.
../bin/pboPack
Video card supports GL_ARB_pixel_buffer_object.
Transfer Rate: 0.0 Mpixels/s. (0.0 FPS)
Transfer Rate: 7.7 Mpixels/s. (30.7 FPS)
Transfer Rate: 8.6 Mpixels/s. (34.4 FPS)
Transfer Rate: 9.0 Mpixels/s. (35.8 FPS)
Transfer Rate: 8.4 Mpixels/s. (33.5 FPS)
Transfer Rate: 8.6 Mpixels/s. (34.3 FPS)
Transfer Rate: 8.3 Mpixels/s. (33.2 FPS)
Transfer Rate: 8.8 Mpixels/s. (35.3 FPS)
PBO mode: off
Transfer Rate: 8.5 Mpixels/s. (34.0 FPS)
Transfer Rate: 6.5 Mpixels/s. (25.9 FPS)
Transfer Rate: 6.6 Mpixels/s. (26.5 FPS)
Transfer Rate: 6.7 Mpixels/s. (27.0 FPS)
Transfer Rate: 6.8 Mpixels/s. (27.4 FPS)
Transfer Rate: 6.7 Mpixels/s. (26.6 FPS)
PBO mode: on
Transfer Rate: 7.6 Mpixels/s. (30.4 FPS)
Transfer Rate: 8.4 Mpixels/s. (33.6 FPS)
Transfer Rate: 8.7 Mpixels/s. (34.9 FPS)
Transfer Rate: 8.9 Mpixels/s. (35.5 FPS)
Transfer Rate: 8.8 Mpixels/s. (35.4 FPS)

What I get with GL_warp2mp4 is around 3.5 fps with 1080p output - that comes to around 6.9 Mpixels/s. So, probably that's all I can expect.
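That estimate can be checked with a one-liner (full 1920x1080 frames assumed; the exact figure depends on the measured fps):

```shell
# pixel throughput of 1080p output at the observed frame rate
awk 'BEGIN { printf "%.1f Mpixels/s\n", 1920 * 1080 * 3.5 / 1e6 }'
```

At 3.5 fps that gives about 7.3 Mpixels/s - squarely inside the 6 to 9 Mpixels/s band that pboPack measured above.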

Probably I will get better performance by avoiding OpenGL and the graphics card and going the remap way as in OCVWarp. The reason being, the data transfer to and from the video card seems to be the bottleneck, and CPU based remap computation is faster.

resources for free Earth satellite photos

Via Mapton, via AppImageHub, free to use images at https://s2maps.eu/
10 metre resolution cloudless whole Earth imagery!

Maptiler indicates that USGS Earth Explorer catalog gives us free date-wise data. For a small area which fits in a single tile, 500+ images show up, dating from 2015, for Sentinel2 data. Can be used for lots of interesting things :)

Apparently Google Earth has a historical animation feature. This is, of course, free of cost for personal use, but not free in terms of licensing. Here's a video about exporting video from Google Earth. But that's only in the downloaded version. The web version does not have the video export etc. What is required is the "Google Earth Pro" desktop version, which is still free for personal use.

(On Linux, the latest version seems to have some issues, as mentioned here. I too had the blank screen issue. Installing the 7.1 version deb made it work.)

My location has around 17 historical images, from 2004. Most from 2015 onwards.



Animated gif from 3 images, created with ezgif.com,


audio processing - normalize to RMS with Reaper

Necessity is the mother of invention. Being unable to access my old laptop with the tools I needed to process audio files using my old method with Sound Forge, I looked around for ways to do it in Reaper.

Found some very good ways to do it using the SWS Normalize filters if you are on Mac or Windows.

And for Linux, using just the built-in filters, found this excellent video which explains how to do it manually using a compressor and a limiter, and watching the master level meter - Setting the LUFS level in Reaper.

In my case, I created some template projects with the following, (on Reaper v6.05):

  1. View -> Floating Mixer Master. Right-click on the Mixer Master VU meter, change the settings to Peak+RMS, and increase the window size to 5000 milliseconds. Display offset -10 dB (so that an RMS level of -10 dB will show up as 0 dB).
  2. Added the ReaComp filter, with my preferred very fast attack and very slow release settings, increasing the output level and removing auto-makeup -
  3. This is followed by the Master Limiter filter, with default settings.
  4. Simply reducing the threshold on the Master Limiter increases the RMS value - play the audio and adjust. The 5 second window means that you have to play at least 5 seconds for the level to stabilize. 
  5. For those sections of the audio track which are louder, split them into separate items by placing the cursor at the split point and pressing the 's' key. On the separate item, right-click, item properties..., and reduce the item volume. We can also hit the Take FX button at the bottom of this dialog box, and apply per item filters.
  6. If necessary, do extra fades or crops at the end. And then we can render directly to 44.1 kHz mp3 96 kbps mono CBR with quality setting q=2.
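For reference, the RMS figure that Peak+RMS meter shows is just the root-mean-square of the samples in its window, expressed in dB; an awk sketch with made-up sample values:

```shell
# RMS level in dBFS of a stream of samples (values here are made up)
printf '0.5\n-0.5\n0.5\n-0.5\n' | awk '
    { sum += $1 * $1; n++ }
    END { printf "RMS = %.2f dBFS\n", 20 * log(sqrt(sum / n)) / log(10) }'
```

A square wave at half full scale has an RMS of 0.5, i.e. about -6 dBFS, which is what this prints; the compressor/limiter chain above is just a way of pushing that number up to the target without clipping the peaks.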