Tuesday, November 19, 2019

brief review of machine transcription - old recording to text

Tried out automated speech recognition from Google. Though there has been a lot of improvement over the last couple of decades, it still has a long way to go.

1. Google's paid product is priced at around Rs. 100 per hour, with the first hour free. https://cloud.google.com/speech-to-text/
 Google's version first:

Followed by my transcript.


You can see that it has made quite a few mistakes and, more importantly, skipped entire sections. The skipping of sections would be the big no-no for me. (I take around 3 minutes to transcribe one minute of English from this old recording.)

2. Using Google Docs "voice typing" free feature - 


Monday, November 11, 2019

some simple light-weight webservers

On Linux - lighttpd is widely available, for example in Ubuntu's repos. Some examples given at https://unix.stackexchange.com/questions/32182/simple-command-line-http-server are SimpleHTTPServer in Python, node.js (lightweight compared to Apache) and many others. And for Windows, this page lists tinyweb, which can even run cgi.

trial embed of audio and pdf

Testing an embedded audio file with two embedded pdfs. All hosted on Google Drive, with embed code obtained from More items -> Open in New Window -> More items -> Embed.

Indic input enabled using Google Account Settings -> Language -> Input Tools -> Edit
https://www.google.com/inputtools/services/products/account-central.html
Then choose File -> Language to choose the required language within Google Docs.





warping on Linux - initial ffmpeg trials without a final solution

I still use the tool made in 2014 to pre-warp fulldome planetarium shows for our planetarium's Blu-ray-player-based playout system. But that tool's main limitation seems to be that it works only on Windows XP. It does not seem to work even on Windows 7 - I tested with RS's laptop.

In the search for alternatives for future-proofing, I had a look at ffmpeg's Remap filter. If a suitable map is made, it should be possible to use this filter to do the warping in a platform independent way.
ffmpeg -i /path/input.mp4 -i front_kodak_sp360_1440_xmap.pgm -i front_kodak_sp360_1440_ymap.pgm -lavfi remap out.mp4
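For reference, the remap filter just expects two grayscale images in which each output pixel stores the source x (or y) coordinate to sample from. Below is a minimal sketch of how such map files could be generated - this is my own assumption of the approach, not the code that produced front_kodak_sp360_1440_xmap.pgm - using OpenCV to write an identity mapping, which would then be replaced with the actual fisheye pre-warp:

// Sketch: write identity xmap/ymap PGM files for ffmpeg's remap filter.
// Assumes OpenCV is available; the identity mapping is a placeholder for
// the real dome pre-warp function.
#include <opencv2/opencv.hpp>

int main()
{
    const int W = 1440, H = 1440;   // assumed map size
    cv::Mat xmap(H, W, CV_16UC1), ymap(H, W, CV_16UC1);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
        {
            // for each output pixel, store which input pixel to sample
            xmap.at<unsigned short>(y, x) = (unsigned short)x;
            ymap.at<unsigned short>(y, x) = (unsigned short)y;
        }
    cv::imwrite("xmap.pgm", xmap);
    cv::imwrite("ymap.pgm", ymap);
    return 0;
}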

A preliminary test ran at around 8 fps - a 1-minute video took 4 minutes to encode.

out.mp4 was of a similar format to the input file, but not identical, as seen with

mediainfo out.mp4

To see whether OpenGL would be faster, first tried just re-encoding to DivX,
ffmpeg -i /path/input.mp4 -c:v libxvid out.mp4
which did the 1-minute video in 2 minutes - around 15 fps - but with quite low quality for rapidly changing scenes.

ffmpeg -i /path/input.mp4 -c:v libxvid -qscale:v 100 out.mp4
did not make much of a difference.


Some info about OpenGL filters with ffmpeg
Piping to the screen works, albeit not in real time - perhaps 0.25x.
ffmpeg -i /path/input.mp4 -i front_kodak_sp360_1440_xmap.pgm -i front_kodak_sp360_1440_ymap.pgm -lavfi remap -f matroska - | ffplay -

inspired by

Using rawvideo (raw RGB is not supported in Matroska, so using NUT), it went to 0.68x, but in black and white!
ffmpeg -i /path/input.mp4 -i front_kodak_sp360_1440_xmap.pgm -i front_kodak_sp360_1440_ymap.pgm -lavfi remap -c:v rawvideo -f NUT - | ffplay -

With AVI instead of NUT, the colours are mismatched and not antialiased?!

Later, went to try avisynth instead. That is described in another post.




Friday, November 08, 2019

shared folder new behaviour

Now, when I move something out of a shared Windows folder mounted on our Linux server, I get this message:

mv: listing attributes of ‘THE_FILE_NAME.mp3’: Numerical result out of range

After the file is moved to a local drive on the Linux server, moving it elsewhere doesn't give this error. Maybe this behaviour has something to do with this,

https://wiki.archlinux.org/index.php/File_permissions_and_attributes

"mv silently discards extended attributes when the target file system does not support them."

Edit: Most probably this message is due to the group name or user name not being mapped to a GID or UID properly, as discussed at https://serverfault.com/questions/811997/mounting-a-cifs-share-fails-with-numerical-result-out-of-range

Thursday, November 07, 2019

link dump - possible ways to port to GL_warp2Avi

Exploring ways to port my old tool GL_warp2Avi to cross-platform code - 


links to Linux version, which uses avifile >=0.60


Edit - a few months later, I've created GL_warp2mp4 using OpenCV and GLUT and the generalized OCVWarp based on OpenCV which can do what GL_warp2Avi could do.

virtual tours with 360 photos, and panoramas

Thought about creating a virtual tour of the planetarium with the Insta360 Air camera. This camera works reasonably well in good light, but cannot capture in low light as is the case inside the planetarium theatre - severely pixelated with the lights on, and completely dark when trying to capture the live planetarium show.

Saw some tutorials using cupix and google tour creator, then made one with google tour creator,
https://poly.google.com/view/6fr4ibdUbv0

Added the same images (after resizing to 4K) to google street view using the street view app. To change the directions in which the 360 images are pointed, under Edit location, the photo can be rotated by swiping while keeping another finger on the directions (N, NE, E etc.) shown below, to keep the directions steady. A bit fiddly. Positioning was also a bit fiddly, since the zoom level on the positioning map was not sufficient in my case.

Made a couple more panoramas, both using the Panorama feature in the stock camera app on my LG Q6 Android phone and using the street view app. The street view app tries to stitch it into a 360 image, so requires a very large number of photos.
Shot with the Google Street View App, which needs you to align the phone to a shown circle and click repeatedly.

Shot with the Panorama feature of the phone's camera App, which needs you to just wave your phone from left to right or vice versa. 
Also shot with the Panorama feature on the phone camera. A passerby created an artifact, so repeated below. 




stackoverflow wisdom on when to use GPU

getting a human being with HDFC phonebanking

It seems a bit difficult to reach a human being on the HDFC Bank toll-free number. One possible workaround is to choose the "Loans" option. If that option does not appear in the first menu, try the "Report stolen card" option, and when it asks for confirmation, choose "2 for any other services" and then "Loans". After a human being answers, you can tell them what you want, and if they say that they need to transfer your call, agree.

Thursday, October 31, 2019

GPU and CUDA explorations with OpenCV


Bscancompute
While loop 9.01159 sec.
While loop 12.5376 sec.
While loop 13.7008 sec.

2nd run
While loop 9.03921 sec.
While loop 8.88572 sec.
While loop 8.32227 sec.

Test without FFT,
Before while loop 0.0226007 sec.
While loop 3.82644 sec.
While loop 3.6609 sec.
While loop 3.07903 sec.

• A simple sample program, to perform a 2D DFT and inverse, gave the following results on running more than once -
samplecudadft,
480x360 fft and inv fft
DFT and inverse, with upload/dl 0.255261 sec.
(after running several times).

2400x360,
DFT and inverse, with upload/dl 0.261917 sec.

CPU
E:\OCT\opencvcuda\OpencvCuda\x64\Release>samplecudadft
DFT and inverse, on CPU 0.0144597 sec.
DFT and inverse, with upload/dl 0.260429 sec.
• The bottleneck seems to be the FFT "planning" and not the upload/download.
E:\OCT\opencvcuda\OpencvCuda\x64\Release>samplecudadft
DFT and inverse, on CPU 0.0146046 sec.
DFT and inverse, with upload/dl 0.263986 sec.
DFT and inverse, without upload/dl 0.260521 sec.
and running it a 2nd time,
DFT and inverse, 2nd time without upload/dl 0.00618453 sec.
• Even when called as a function, quite fast after the first time.
DFT and inverse, as a function 0.00678208 sec.
DFT and inverse, as a function 0.00605592 sec.
DFT and inverse, as a function 0.00654344 sec.
DFT and inverse, as a function 0.00608254 sec.
DFT and inverse, as a function 0.00651554 sec.
DFT and inverse, as a function 0.00623649 sec.
DFT and inverse, as a function 0.00607195 sec.
DFT and inverse, as a function 0.0063154 sec.
DFT and inverse, as a function 0.00597894 sec.
DFT and inverse, as a function 0.00685649 sec.
DFT and inverse, as a function 0.00608895 sec.
• Will probably need to optimize based on
https://docs.opencv.org/master/dd/d3d/tutorial_gpu_basics_similarity.html
• Currently, with upload/download and variable assignment not optimized,
on CPU, While loop 9.15065 sec.
on GPU, While loop 9.34765 sec.
where variable initialization and 64-bit to 32-bit conversion and back were included.

More importantly, the Bscan image shows up a bug.
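For context, here is a minimal sketch of the kind of samplecudadft timing test described above, assuming OpenCV built with CUDA support - the sizes and loop are placeholders, not the actual Bscancompute code. The first iteration includes the FFT "planning" overhead; later iterations show the much smaller steady-state time.

#include <opencv2/opencv.hpp>
#include <opencv2/cudaarithm.hpp>
#include <iostream>

int main()
{
    // complex-valued test image, 480x360 as in the timings above
    cv::Mat img(360, 480, CV_32FC2, cv::Scalar(1, 0));
    cv::cuda::GpuMat d_img, d_fft, d_inv;

    for (int i = 0; i < 3; i++)
    {
        double t = (double)cv::getTickCount();
        d_img.upload(img);
        cv::cuda::dft(d_img, d_fft, img.size(), 0);
        cv::cuda::dft(d_fft, d_inv, img.size(), cv::DFT_INVERSE | cv::DFT_SCALE);
        cv::Mat result;
        d_inv.download(result);
        t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();
        std::cout << "DFT and inverse, with upload/dl " << t << " sec." << std::endl;
    }
    return 0;
}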

                              Friday, October 25, 2019

                              a workflow for equal loudness RMS normalization on Adobe Audition

                              Sat with the Telugu team and worked out the following procedure for increasing and standardizing the RMS volume of their audio to -12 dB in Adobe Audition.


                              1. Match the volumes in the relevant regions to around -12 dB peak.
2. Do Noise Reduction by selecting a region containing only background noise (no speech), capturing that as the noise print to be removed, around 70% or 80%.
                              3. Noise Gate at -60 dB - the threshold to be tweaked depending on audio's background noise level.
                              4. Choose either L or R track and mute the other track if relevant audio is only on one track, then convert to Mono.
                              5. Multiband compression using the Broadcast preset, and decreasing the thresholds by -20 dB, with Brickwall On.
                              6. Reverb filter if required.
                              7. Total RMS to -18 dB.
                              8. Hard Limiter at -6 dB.
                              9. Normalize peaks to 100%.


                              Tuesday, October 22, 2019

                              UTF-16 LE with BOM and compiler errors, warning: null character(s) ignored

By some fluke, the source code which I had edited on Windows got saved on github as UTF-16 LE with BOM (as reported by Geany). This was not compiling on Linux; the compiler gave lots of warnings,

                              warning: null character(s) ignored

Googling turned up the UTF to UTF converter on www.fileformat.info, which converted the file properly to UTF-8. Problem solved.

                              Saturday, October 12, 2019

                              clearing up GMail used space

Usage changed from 16.21 GB to 16.11 GB after deleting 8893 conversations from Trash, most of them short text-only email notifications.

That works out to roughly 100 MB for 10,000 short text emails, or around 10 kB per email.

                              So, probably it does not make sense to delete small emails - deleting attachments would clear up much more space.

                              Monday, October 07, 2019

                              X windows issues after CUDA install, resolved

                              Since my machine is supposed to have a GeForce GT 330M according to lspci, I thought I could try out the old version of CUDA SDK - apparently version 6.5 is the last one to support the GT 330M. The deb was supposed to be for Ubuntu 14.04, and I was running Linux Mint 18.1 based on Ubuntu 16.04. The deb and the apt-get install ran without any complaints. But ...

The build of the examples worked, but the CUDA-compatible NVIDIA card was not detected. And, after a restart, X did not come up.

The possible culprit was the NVIDIA driver meant for the older 14.04 kernel, which probably needed to be removed, and so on. After some googling and some trial and error, the following method worked -

                              sudo apt purge nvidia*
                              sudo apt-get update 
                              sudo apt-get autoremove
                              sudo apt-get upgrade
                              sudo apt-get install --reinstall xserver-xorg-core
                              sudo apt-get install --reinstall xserver-xorg-video-nouveau

                              and a reboot. 

                              Sunday, September 22, 2019

                              WhatsApp notification issue and solution

                              On my LG Q6 phone, running Android 8, there was a problem of WhatsApp notifications arriving only after I opened WhatsApp. WhatsApp voice call notifications were also not coming through. After some digging, found the setting which made the difference - under Battery -> Power saving exclusions, allowed WhatsApp as an exclusion. Problem seems to be solved.

                              Saturday, September 14, 2019

                              tutorial on 360 to dome with Vuo on Mac

In the second half of this tutorial, Paul talks about warping 360 videos to dome - http://paulbourke.net/dome/vuo/

                              Edit: Can now use OCVWarp and do this ... 

                              Thursday, September 12, 2019

                              Saturday, September 07, 2019

                              increasing font size in Octave legend

Some figures created in Octave had very small font sizes when added to the LaTeX manuscript. Trying to fix the text with Inkscape led to kerning issues similar to this post. So, found these solutions at
                              https://stackoverflow.com/questions/1532355/increase-font-size-in-octave-legend

                              Used commands like
                              print ("j0exp.eps", "-depsc", "-F:20")


                              Tuesday, September 03, 2019

                              starting offline processing in a new thread - C++

                              Tried various options to begin processing in a new thread for BscanFFT, without locking the UI - 

                              Just
                              std::system(offlinetoolpath);
                              // - this causes the BscanFFT program to wait for the offline tool to finish

                              Then,

pid = fork();
if (pid == 0)
{
    // child process
    std::system(offlinetoolpath);
}
else if (pid > 0)
{
    // parent process
}

gives the error

[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
BscanFFTspinj.bin: ../../src/xcb_io.c:259: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.

Apparently, a system-specific method is safer, http://www.cplusplus.com/forum/beginner/13886/

                              "Actually, the most important disadvantage of system() is that it introduces security vulnerabilities. Using a system-dependent method (that skips the shell) is always safer.

                              In Windows, it is CreateProcess(), which is really straight-forward to use.

                              In POSIX (Linux, etc) it is a combination of fork() and one of the exec() functions.

                              http://linux.die.net/man/2/fork

                              http://linux.die.net/man/2/execve

                              http://linux.die.net/man/3/execv

                              These are also easy to use, but there are some caveats. Follow the example codes."

                              execve with the example code gave
                              (show:9323): Gtk-WARNING **: 10:47:16.601: cannot open display:

                              https://stackoverflow.com/questions/2437819/gtk-warning-cannot-open-display-when-using-execve-to-launch-a-gtk-progra/2437850#2437850

                              suggested using execv instead. This worked.
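For reference, here is a minimal sketch of the fork() + execv() combination that worked - assuming the offline tool takes no arguments; launchOfflineTool and the paths are placeholders, not the actual BscanFFT code.

#include <unistd.h>
#include <iostream>

void launchOfflineTool(const char* offlinetoolpath)
{
    pid_t pid = fork();
    if (pid == 0)
    {
        // child process: replace its image with the offline tool.
        // execv (unlike execve with an empty envp) keeps the current
        // environment, so DISPLAY etc. remain set for GTK/X programs.
        char* argv[] = { const_cast<char*>(offlinetoolpath), nullptr };
        execv(offlinetoolpath, argv);
        std::cerr << "execv failed" << std::endl;  // only reached on failure
        _exit(1);
    }
    else if (pid > 0)
    {
        // parent process: continue with the UI without waiting for the child
    }
    else
    {
        std::cerr << "fork failed" << std::endl;
    }
}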


                              external hard disk prices

                              From amazon.in, current external USB hard disk prices which I collected for N -
                              1 TB = Rs. 3500 - per TB cost 3500
                              2 TB = Rs. 5100 - per TB cost 2550
                              3 TB = Rs. 6200 - per TB cost 2067
                              4 TB = Rs. 8100 - per TB cost 2025

                              The 3TB drives seem to be attractively priced due to the sharp drop in per TB price as compared to 2TB drives. 


                              Monday, September 02, 2019

                              gimp 16-bit caution

Gimp 2.8 opens 16-bit png files, but saves them as 8-bit tif or png. Octave 4.2.2, on the other hand, does write 16-bit png with imwrite. Octave's behaviour depends on shared libraries too, of course.

                              On Linux,
                              file filename.png
                              gives information about the file, like bit-depth and so on. 
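A programmatic alternative for the same check (my own addition, assuming OpenCV is installed) - read the image without depth conversion and report its bit depth. cv::imwrite will also keep 16 bits when writing a CV_16U Mat to png.

#include <opencv2/opencv.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: checkdepth filename.png" << std::endl; return 1; }
    // IMREAD_UNCHANGED keeps 16-bit data as 16-bit instead of converting to 8-bit
    cv::Mat img = cv::imread(argv[1], cv::IMREAD_UNCHANGED);
    if (img.empty()) { std::cerr << "could not read " << argv[1] << std::endl; return 1; }
    std::cout << "bits per channel: " << 8 * (int)img.elemSize1()
              << ", channels: " << img.channels() << std::endl;
    return 0;
}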

                              Sunday, August 25, 2019

                              smorgasbord of programming tips and ideas - OpenCV etc

                              Here is a collection of links for various implementations of OpenCV-based medical imaging software -

1. CUDA and OpenCV - just need to compile OpenCV on a machine with a CUDA-compatible graphics card. My old machines have cards which are only supported by older versions of the CUDA SDK. Can expect a 180% speedup as per some papers.
                                https://answers.opencv.org/question/62955/cudadft-speed-issues-too-slow/
                                indicates that the Matlab times were wrong.

                                https://opencv.org/cuda/
                                https://developer.nvidia.com/cuda-gpus and legacy gpus at
                                https://developer.nvidia.com/cuda-legacy-gpus
                                https://developer.nvidia.com/cuda-toolkit-archive and supported GPUs at
                                https://en.wikipedia.org/wiki/CUDA#GPUs_supported

                              2. A nice class to capture from Spinnaker SDK to OpenCV Mat, running in a separate thread - https://github.molgen.mpg.de/MPIBR/SpinnakerCapture
3. Writing an OpenCV Mat file to a binary format on disk - lots of options are mentioned at
  https://answers.opencv.org/question/6414/cvmat-serialization-to-binary-file/ (a minimal sketch is given after this list).
                              4. I had a thought of trying DICOM writing. But it turned out to be quite complicated to write DICOM, since so many parameters need to be initialized.
                                https://stackoverflow.com/questions/36379637/read-dicom-in-c-and-convert-to-opencv
                                https://github.com/marcinwol/dcmtk-basic-example
                                https://itk.org/ITKSoftwareGuide/html/Book2/ITKSoftwareGuide-Book2ch1.html#x7-240001.12
                                https://www.researchgate.net/post/ITK_vs_OpenCV

                              5. Some tips on exporting from Octave to OBJ and STL formats - commonly used in 3D printers - https://alexbjorkholm.wordpress.com/2016/10/07/exporting-3d-objects-from-octave/
                              6. Plotting using slice in Matlab -
                                https://stackoverflow.com/questions/35549733/how-can-i-plot-several-2d-image-in-a-stack-style-in-matlab
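For item 3 above, here is a minimal sketch of the simplest option - dumping a cv::Mat as a raw binary file with a small header. This is my own illustration, not code from the linked answers, and it assumes the Mat is continuous in memory.

#include <opencv2/opencv.hpp>
#include <fstream>
#include <string>

void writeMatBinary(const cv::Mat& m, const std::string& filename)
{
    std::ofstream ofs(filename, std::ios::binary);
    int rows = m.rows, cols = m.cols, type = m.type();
    // small header so the file can be read back into a Mat later
    ofs.write(reinterpret_cast<const char*>(&rows), sizeof(int));
    ofs.write(reinterpret_cast<const char*>(&cols), sizeof(int));
    ofs.write(reinterpret_cast<const char*>(&type), sizeof(int));
    // assumes m.isContinuous(); call m.clone() first if it is not
    ofs.write(reinterpret_cast<const char*>(m.data), m.total() * m.elemSize());
}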

                              Ubuntu 18.04 screen issue for File Open dialog boxes, scrollbars [fixed]

After a new install of Ubuntu 18.04, had a funny problem where Geany's File Open would lead to the application being greyed out, as if the dialog box had opened, but with no dialog box visible on screen. Googling found the answer in this thread, https://ubuntuforums.org/showthread.php?t=2395250

Apparently, 18.04 configures 2 screens even when there is only one screen. The fix was to go to Settings and change to one screen.

There is still a problem with Geany's scrollbars - they don't redraw after scrolling, so the whole scrollbar becomes orange instead of just the orange scrollbar handle. My initial installation was a minimal Ubuntu desktop. Later, installing unity did not help.

                              Edit: This post seems to have the solution - installing a newer version of Geany from the ppa solved the scrollbar issue. 
                              sudo add-apt-repository ppa:geany-dev/ppa 
                              sudo apt-get update 
                              sudo apt-get install geany geany-plugins-common

                              Friday, August 23, 2019

                              Visual Studio specifics of OpenCV project with Spinnaker SDK

                              This is linked to my previous post about the QHY -> Spinnaker implementation, the nitty-gritties of Visual Studio project changes required under Windows.

1. The SDK and examples are installed by default under Program Files on C:. If Visual Studio opens the examples there, it complains that the directory is not writable. So, make a copy, including the entire spinnaker directory - not just the src folder, since the include paths in the examples are set as ..\..\include and similarly for libs.
                              2. In my case, since I had VS2017, I had to install the VS2015 toolset as mentioned in my previous post, and when opening the solution, choose the "No Upgrade" option for these projects.
                              3. Once these steps were done, I could compile and run the examples. But for my project, I had to create a new project, choose the V140 toolset, add libraries, add include and lib paths, and also prevent errors due to my use of sprintf.


                              4. One way would have been to make a copy of the common properties file which I had made earlier, make modifications to it, and add that to the project. In this case, I just made specific changes to the Release configuration of this particular project.
                              5. Additional include dir in C/C++ -> General, Additional Include Directories would be a better place to add it than in this screenshot.


                              6. Linker -> Input -> Additional Dependencies


                              7. Additional Library Directories.

I would probably need to copy the dlls manually, or follow the post-build steps mentioned in this earlier post.

                              implementing opencv project with Spinnaker SDK for Point Grey FLIR cameras

                              This post documents my learning curve with migrating code written for the SDK for QHY cameras to the Spinnaker SDK for Point Grey FLIR cameras.

                              1. On both Windows and Linux, I had to make changes to my build platform. On Linux, Ubuntu 18.04 was a supported platform, so I installed 18.04 onto a spare partition. On Windows, I was using Visual Studio 2017, and had to install the VS2015 toolset. 
                              2. On Linux, even though the spinnaker examples had large and complex makefiles, the sdk had put the include and lib files in the default locations, so I just had to modify my CMakeLists.txt file minimally -


                              3. On Windows, there was a much more elaborate series of steps, which I will put in a separate post.
4. The simpler parts of the implementation, like changing exposure time, gain etc., could be done in a fairly straightforward way. Unfortunately, there was no example for continuous capture with live display. With ptgrey.com being redirected to flir.com, the examples that turn up on googling couldn't be directly accessed. Finally, found this, which indicated that the StreamBuffer handling had to be changed. Looking for this specific information, found the documentation at this page, which mentions its potential use in live-view scenarios. With that in place, BeginAcquisition and EndAcquisition could be moved outside the capture loop, and the frame rate improved.
                              Edit: Here is a dump of the nodes I changed, full code is at https://github.com/hn-88/FDOCT/blob/master/BscanFFTspin.cpp (spinnaker documentation talks about the details of the nodes, and the example code helps).

                              (Gain is in dB)

                              In terms of nodes,
                              AcquisitionControl -> AcquisitionMode -> Continuous
                              AcquisitionControl -> ExposureAuto -> Off
                              AcquisitionControl -> ExposureTime -> 1000
                              AnalogControl -> GainAuto -> Off
                              AnalogControl -> Gain -> 0.0
                              AnalogControl -> Gamma -> 1.0
                              ImageFormatControl -> Width -> 1280
                              ImageFormatControl -> Height -> 960
                              ImageFormatControl -> OffsetX -> 0
                              ImageFormatControl -> OffsetY -> 0
                              ImageFormatControl -> PixelFormat -> Mono8
                              ImageFormatControl -> PixelFormat -> Mono16
                              ImageFormatControl -> AdcBitDepth -> Bit12
                              ImageFormatControl -> AdcBitDepth -> Bit10
                              BufferHandlingControl -> StreamBufferHandlingMode -> NewestOnly

                              Using Displayspin, on my machine, I got
                              71 fps for Bit12 Mono8
                              34 fps for Bit12 Mono16 (converted to Mono8 to display on screen)
                              71 fps for Bit10 Mono8

                              So, probably we can always use Bit12, since it does not seem to have much of a speed penalty.

                              And a small note on a mistake I made - initially I got the error  
                              Spinnaker: Can't clear a camera because something still holds a reference to the camera [-1004].
                              Putting pCam = nullptr; before camList.Clear(); removes this error.
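For reference, here is a minimal sketch of how one of the nodes listed above (StreamBufferHandlingMode -> NewestOnly) is set through the GenApi node map, following the pattern in the Spinnaker SDK examples. Error handling is mostly omitted, so treat it as an outline rather than the exact BscanFFTspin code.

#include "Spinnaker.h"
#include "SpinGenApi/SpinnakerGenApi.h"

using namespace Spinnaker;
using namespace Spinnaker::GenApi;

void setNewestOnly(CameraPtr pCam)
{
    // BufferHandlingControl nodes live in the TL stream node map,
    // not in the main camera node map used for AcquisitionControl etc.
    INodeMap& sNodeMap = pCam->GetTLStreamNodeMap();
    CEnumerationPtr ptrHandlingMode = sNodeMap.GetNode("StreamBufferHandlingMode");
    CEnumEntryPtr ptrNewestOnly = ptrHandlingMode->GetEntryByName("NewestOnly");
    if (IsAvailable(ptrNewestOnly) && IsReadable(ptrNewestOnly))
        ptrHandlingMode->SetIntValue(ptrNewestOnly->GetValue());
}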




                              Wednesday, August 21, 2019

                              IP address, MTU setting on Linux for FLIR Point Grey GigE cameras

                              It looks like my hardware does not support jumbo frames - https://linuxconfig.org/how-to-enable-jumbo-frames-in-linux

Even after setting up a larger MTU with ifconfig,
https://www.flir.com/support-center/iis/machine-vision/knowledge-base/lost-ethernet-data-packets-on-linux-systems/
the MTU reverted to 1500. So, it is probably limited by the hardware or the driver.

                              Also, Network Manager then complains that the interface is not managed,
                              https://askubuntu.com/questions/71159/network-manager-says-device-not-managed

                              But the camera works as long as the IP address is set correctly. According to the knowledge base article at
                              https://www.flir.in/support-center/iis/machine-vision/knowledge-base/setting-an-ip-address-for-a-gige-camera-to-be-recognized-in-linux/

                              setting it anywhere in the 169.254.x.y range with a 255.255.0.0 netmask should do the trick - link local address (LLA).

                              More info about our camera is at
                              https://www.flir.com/products/blackfly-s-gige/?model=BFS-PGE-16S2M-CS

                              https://www.flir.com/support-center/iis/machine-vision/knowledge-base/technical-documentation-bfs-gige/

                              Getting Started Manual which talks about Windows installation
                              Technical Reference which gives frame rates, pin-outs etc

                              Edit - found this page which has a good explanation of the bandwidth used, effect of lower MTU (slightly higher CPU usage), multiple GigE cameras and so on.

                              Sunday, August 18, 2019

                              Remmina, RDP and sound

                              For convenience, I was trying out using Sound Forge and other audio editing software on my laptop, after logging in with a Remote Desktop Connection using Remmina as the client. With the default setting, Sound Forge etc would complain that there is no suitable audio output device. Googling, found that RDP has various options - taking sound to remote client, leaving the sound on local machine, or no sound at all. Remmina default was no sound. Tried bringing sound to local device, but that did not work for DirectSound, since I'm on Linux Mint. Changing it to leave at local, SF etc would work properly.


                              tried ubuntu install from mint

                               https://help.ubuntu.com/lts/installation-guide/i386/apds04.html

but apt update after chroot seemed to be taking the values from Sylvia (the Linux Mint release).

                              So, simpler to just use a bootable usb to create and install the new ubuntu partition.

                              Wednesday, August 14, 2019

                              moving mailing list to google groups

The process begun in Dec 2017 was completed now, in mid-August: transitioning our mailing list with 60k subscribers to Google Groups. We had to move slowly due to the limits on directly adding users - 100 per day per group, and only in batches of 10 email ids per addition.

Had asked PB to unconfirm users who have been moved to Groups instead of unsubscribing them, because unsubscribing will send them a mail saying they have been unsubscribed.

                              Points to note: 
                              (a) any user you add will get an email
(b) for strictly mailing-list-only functionality, remove posting permissions for everyone except the owner and manager, under Permissions -> Posting Permissions.

                              Google groups does handle bounces, and it is advisable to remove ids which bounce -
                              https://productforums.google.com/forum/#!msg/apps/5PCc-uDVowo/S4A4YPnnzXcJ

                              (Automating signup though google docs or google forms or something would be great. An example is given here,
                              https://sites.google.com/site/sandorexampleclub/how-to/step-11
                                PS wrote his own implementation which is on script.google.com as a GSuite script, I think.)

                              Tuesday, August 13, 2019

                              easy 3D volumetric rendering with imagej

                              ImageJ, or its full-featured version Fiji, makes it easy to create volumetric renderings and flythroughs using the 3D Viewer plugin. Plugins are installed by just copying the jar file into the plugins folder, found using locate on my system at ~/.imagej/plugins. Multi-layer tifs or gifs can be used as input. Gimp cannot create multilayer tifs, but it can create gif animations, which are sufficient. Open as layers in Gimp to select multiple layer files, save as animated gif, open in ImageJ.

Inside ImageJ, Image -> Scale allows setting a proper scale - making one axis longer, etc. The Image menu also has Crop, Duplicate, Transform and Zoom. Image -> Adjust has Threshold, Brightness/Contrast, etc. Then, Plugins -> 3D Viewer.

                              The 3D Viewer plugin has nice recording features, with a built-in 'Record 360 degree rotation' under its View menu. It opens another Movie window, which can then be exported using ImageJ's File -> Save as and choosing Gif.

                              ezgif.com helps to crop and compress gifs easily.



                              Monday, August 12, 2019

                              creating animated gif or video from Octave

                              My Octave install does not have the campos function, but the latest version seems to have it. So, this could be useful in the future.

                              This post has some good guidelines - using
                              set(0, 'defaultfigurevisible', 'off');
                              to speed up things, and using print to save as png. Then using mencoder etc to create the movie.
                               mencoder mf://output/*.png -mf w=640:h=480:fps=25:type=png -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi

                              Sunday, August 11, 2019

                              bulk renaming from commandline on linux

                              Using the rename tool,
                              An example is
                              rename 's/\.html$/\.php/' *.html
                              From
                              we see that a normal string substitution would be like
                              s/dog/cat/

                              rename -n 's/SV_2019_08_10_PM_SOME_LONG_NAME/SV_2019_08_10_SHORTNAME/' SV*mp3
                              shows what it will do, and -v will actually do it and also display the result.

                              Tuesday, July 16, 2019

using DCMTK with CMake on Linux

This stackoverflow post mentions an old version of CMake and making on Windows. More googling led to FindDCMTK on cmake.org, so that I could just edit my CMakeLists.txt to read

                              ...snip...
                              find_package(USB-1 REQUIRED)
                              find_package(OpenCV REQUIRED)
                              find_package(DCMTK REQUIRED)

include_directories(${USB_1_INCLUDE_DIR})
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${DCMTK_INCLUDE_DIRS})

                              ... snip ...
                              target_link_libraries(${FILENAME1} ${LIBUSB_1_LIBRARIES} ${OpenCV_LIBS} ${DCMTK_LIBRARIES} )
                               ... snip ...

                              That worked nicely, without having to manually find the libraries and so on.

                              Tuesday, July 02, 2019

                              mounting a windows smb share and automating file moves

Mounted //PATH/FOLDER on a Linux machine onto a folder /path/to/mountdir using

sudo mount -t cifs //192.168.ip.address/FOLDER /path/to/mountdir -o uid=usernameofowner,username=username,password=xxxxx

                              A cron job was added to automatically move files every 5 minutes, like
                              */5 * * * * mv /path/to/mountdir/LIVE_*  /path/to/another/dir/

But that caused problems, generating hundreds of emails to usernameofowner saying mv: cannot stat ‘/path/to/mountdir/LIVE_*’: No such file or directory whenever there was no file there to move. So, PB modified the cron entry by appending
> /dev/null 2>&1
which discards both the output and the error message, and seems to have solved the issue - working well.

                              Wednesday, June 12, 2019

                              income tax filing quirks for assessment year 2019-20

                              This year, I thought of filing ITR1, since I only had short-term capital losses and no gains. But on downloading the pre-filled XML and trying to pre-fill the java utility, it gave an error that the xml is not valid. Then tried ITR2 java utility, with ITR2 prefilled XML. This time it mentioned that the assessment year is wrong. Manually edited 2018 to 2019 in the XML, and it pre-filled successfully.

                              ITR2 needed Basic Salary and DA. So, I computed it from my salary slips, as a percentage of gross salary.

                              Unfortunately, the CG (Capital Gains) section was not working properly in the java utility (AY2019-20 version 1.2). The Mutual Fund section's form was not being displayed, only the instructions were being displayed. So, I did not put in my losses to carry forward to the next year.

                              The tax computation is in the tab Part B TTI. When you click on Save, it does validation. This time, there was also a Submit button, which allows you to directly submit the return from the java utility itself. I did that, and the ITR-V was sent to my email. I chose to e-verify via HDFC bank's netbanking like in this earlier post. The first time I tried, I was logged on to the incometax portal when I started the process, so it did not work. I needed to log out of the incometax portal, and then click on the Income tax e-filing link under Requests in HDFC bank's netbanking. Then it worked. 

                              Wednesday, May 29, 2019

                              ffmpeg commandline for maximum compatibility

                              An old Raspberry Pi was not playing some videos. Looked around for a transcoding option with ffmpeg, baseline x264 seemed fine, and this example worked for me -
                              ffmpeg -i input.avi -c:v libx264 -preset slow -crf 22 -c:a copy -profile:v baseline -level 3.0 output.mp4
                              by looking at

                              Another post has the 2-pass encoding example, where the bitrate is also specified. 

                              Tuesday, May 21, 2019

                              videos for waiting area - redux

                              The original idea of having a marquee which would update as in my previous post turned out to be not very workable. Even if the wifi was not flaky, updating the times etc on a computer in the office would be "too much work" for us in the middle of shows. And the wifi signal was being completely lost due to electrical noise from our solar system exhibit's motor and my room's AC compressor motor. So, the idea was adapted to showing a notice in 3 languages, that the average wait time is 30 minutes, every five minutes or so.

                              Notice made in Google docs and screenshotted, added an "invert" filter scrolling up-down for visibility.

                              The next change was to use the "smart TV" media playback for the two flat screens instead of buying Raspberry Pis for them. Since anyway the content was going to be static, did not make sense to buy. Idea suggested by S.

                              The "smart" TVs would only play a single video file, would need intervention after a single video file was played, would not play playlists. So, strung together all the files into a single file, repeated it for 3 hours so that no intervention would be needed. I had used avidemux, vdub, ffmpeg for doing this, but S more comfortable with VSDC on Windows.

Ffmpeg commandline for making files of a known size - first calculate the required bitrate as file size / duration, using kilobits and seconds. For example, for 500 MB in half an hour:
file size = 500 * 1000 * 8 = 4,000,000 kbits
duration = 30 * 60 = 1800 seconds
bitrate = 4,000,000 / 1800 = 2222 kbps, taken as 2200 kbps,
so, 2200k

                              ffmpeg -y -i input -c:v libx264 -preset medium -b:v 2200k -pass 1 -c:a libmp3lame -b:a 128k -f mp4 /dev/null && \ 
                              ffmpeg -i input -c:v libx264 -preset medium -b:v 2200k -pass 2 -c:a libmp3lame -b:a 128k output.mp4

                              In the ffmpeg example obtained by googling, it was AAC audio. But on my machine,
                               Unknown encoder 'libfdk_aac'
                              So, used libmp3lame instead.

Playing the single file on the Pis had some issues. The older Pi would not play it, since it did not support the codec - x264 High Profile, I think. The other Pi had vignetting issues, with the 16:9 videos appearing too small on the 4:3 TVs. So, the Pis play playlists, and the "smart" TVs play single video files. The older Pi does not play VP8. Files downloaded from Youtube via desktop (MPEG4 codec) were fine; those downloaded via mobile by S were VP8.

                              Also learned that the Raspberry Pi overscan controls do not work with VLC in fullscreen mode (on RPi3).

Also learned that the Raspberry Pi (2 and 3) video out pinout of the TRRS connector does not have Sleeve as ground! TRRS = L, R, GND, Video! Had to modify the TRRS to RCA cable we bought, swapping GND and Video for one of the RCA plugs.

                              For getting composite video out, 0 is NTSC and 2 is PAL,
                              https://www.raspberrypi.org/documentation/configuration/config-txt/video.md

if sdtv_mode=2 alone does not work,

                              https://bhavyanshu.me/tutorials/force-raspberry-pi-output-to-composite-video-instead-of-hdmi/03/03/2014/

                              And with the remote for shutting down the Pis, they can be operated completely independent of wifi availability.

                              Sunday, May 12, 2019

                              Veracrypt fix

                              A Veracrypt volume on a removable (NTFS) hard disk was not mounting. Veracrypt was complaining of bad password or bad volume. I thought the password was correct. Finally, did
                              sudo fdisk -l
                              to find the device name, which was /dev/sdc1 for the removable drive in this case, and then
                              sudo ntfsfix /dev/sdc1
                              It finished in less than a second, noting that it had successfully completed. Then, tried mounting with Veracrypt, success!

                              After mounting, the encrypted volume was visible in Nemo with an icon to unmount it. But unmounting it with Nemo does not really unmount it, probably because Veracrypt uses su privileges to mount - you have to unmount (or Dismount) it via the Veracrypt interface. Only then would the removable drive become free of processes and unmountable via Nemo.



                              Thursday, May 02, 2019

                              Octave memory issues and workaround

I had an Octave script which was reading data from files into a variable, doing some processing, then reading some different data into the same variable, and so on - many hundreds of MB of data, read using the load function. But Octave was running out of memory, and finally I had to
                              killall -9 octave-gui

                              The workaround was to process part of the data, save the intermediate data to file, exit Octave, reload Octave, resume processing, and so on. Painful, but it worked. 

                              Wednesday, April 24, 2019

chapterwise references with LaTeX and biblatex

                              I needed chapterwise references for my thesis. The suggestions at
                              https://tex.stackexchange.com/questions/345163/references-at-the-end-of-each-chapter
                              did not work with the "Cambridge" template on Overleaf.

Or use the TU Delft template, which has per-chapter references.


                              used that, and found:

                              The chapterbib package and biblatex are incompatible

                              Then got this - 

That seemed to work - \begin{refsection} etc.

                              % Chapter 1
                              \begin{refsection}
                              \chapter{Introduction} % Main chapter title

                              ---snip----

                              % and at the end, where we want the references

                              \renewcommand{\bibname}{References}
                              % 20190501 - HN
                              % added this renewcommand to change  the title Bibliography to References
                              \printbibliography
                              \end{refsection}

                              Tuesday, April 16, 2019

mailstore for archiving emails

                              Here is part of an email to a colleague regarding Google Apps email reaching storage limits:

                              Regarding email - currently, our domain does not allow expansion of the existing amount of space. So, the only way is to delete large attachments. In many cases, the attachments are required and cannot be deleted. This can be handled in many ways.

                              1. Forward them themewise to specially created gmail ids and then delete
                              2. archive them locally by storing on a local hard disk and then delete
                              3. automate the local archiving using some tool like
                              https://www.google.com/search?q=imap+archive+tool+open+source

                              I guess you would want to work on Windows? I have not tried any of these tools, but this free Windows tool comes near the top in search results -
                              https://www.mailstore.com/en/products/mailstore-home/

And later, when he tried out mailstore, he found that it had a specific option to get authorized for GMail - using that option, downloading emails was quite painless, though it was a long process for many GB of emails.

                              Thursday, April 11, 2019

                              voting in India

Today is polling day for us, with both State and National elections being held simultaneously here. Though the printed slips with our electoral roll serial number and polling booth location were given to us at our workplace, the same information is also available to everyone via an app,
                              https://play.google.com/store/apps/details?id=com.eci.citizen&hl=en_IN

                              Lots of information is available online as well as on the app, for example, the eligibility criteria and how to register,
                              https://eci.gov.in/faqs/

                              My experience was relatively seamless. Waited for around 20 minutes in line - around 20 people in line in front of me, including elderly ladies who had to be prompted on how to use the machine etc. All done in a harmonious manner. Very civilized. A demo video is at
                              https://eci.gov.in/video-gallery/evm-awareness/know-your-evm-r61/

                              Friday, April 05, 2019

                              Raspberry Pi shutdown using IR remote

                              Initially followed this instructable -

Had to make changes because of the newer versions of Raspbian and lirc.

                              1. Added sensor as per this,
                              changing pinout as required by googling
                              2. Installation by
                              sudo apt-get update
                              sudo apt-get install lirc



                              3. Configuring kernel module was different from
The only thing we need to do is to uncomment
                              dtoverlay=lirc-rpi
                              in /boot/config.txt

                              4. Configuration was slightly different from
                              (a) create /etc/lirc/hardware.conf and put in it
                              LOAD_MODULES=true
                              DRIVER="devinput"
                              DEVICE="/dev/lirc0"
(b) The remote's configuration file was placed inside the directory /etc/lirc/lircd.conf.d
                              (c) starting the service was by /etc/init.d/lircd start

                              5. irexec configuration was different from
We needed to edit /etc/lirc/irexec.lircrc instead, since we want to shut down and reboot, editing it to:
                              begin
                                  prog   = irexec
                                  button = KEY_1
                                  config = sudo shutdown -r now
                              end
                              begin
                                  prog   = irexec
                                  button = KEY_0
                                  config = sudo shutdown -h now
                              end


                              6. irrecord was a little tricky -
                              (Here also, instead of service lircd stop, we need to do /etc/init.d/lircd stop)

                              irrecord first wants you to press random keys. Then, you have to give the name of a key, press and hold it, then enter the next key's name when prompted, and so on for all the keys, and then only press ENTER to stop the process. And some keys are not recognized - maybe those should have been pressed during the first phase of random pressing.

7.  sudo /etc/init.d/lircd status reports an error about duplicate remotes. We need to rename /etc/lirc/lircd.conf.d/devinput.lircd.conf to devinput.lircd.conf.dist
                              sudo mv devinput.lircd.conf devinput.lircd.conf.dist

                              8. sudo /etc/init.d/lircd status reports errors like
                              lircd-0.9.4c[364]: Error: Cannot glob /sys/class/rc/rc0/input[0-9]*/event[0-9]*
                              Then we need to edit /etc/lirc/lirc_options.conf, changing
                              device = devinput
                              to
                              device = default

                              9. sudo /etc/init.d/lircd status reports errors like
                              Error: could not get file information for /dev/lirc0
                              After a lot of searching, found on this page that this could be because the driver lirc-rpi has been replaced with a newer version called gpio-ir, so that the /boot/config.txt line has to be changed from
                              dtoverlay=lirc-rpi
                              to
                              dtoverlay=gpio-ir
                              Did that, and it worked.
                              This is on Raspberry Pi 2 running Raspbian 9 (Stretch) with kernel 4.19.23+, running lirc version 0.9.4c-9 while the other Pi, which worked with dtoverlay=lirc-rpi was running Raspbian 9 (Stretch) with kernel 4.14.98-v7+ and the same lirc version  0.9.4c-9.

                              testing AppImages on various distros - dd iso to USB

As mentioned in my previous post, I had created an AppImage for distributing a Linux binary. Testing on 3 distros one generation older is recommended, using chroot. For example,
                              https://www.pcsuggest.com/setup-a-32-bit-chroot-with-ubuntu-live-cd/

                              But I found the procedure for creating chroots to be more cumbersome than just dumping the Live distro iso into a USB pen drive, booting another machine from USB, and checking the AppImage on the distro.

                              1. Since I was on an Ubuntu-based Mint Linux 18.1, it had a "Startup Disk Creator" GUI app for dumping Ubuntu-like Live CD/DVD to USB devices.

                              2. For Fedora and OpenSuse, I used the following commandline, taking care to note the correct device id like /dev/sdc1 or /dev/sdd1 or whatever using mount,
                              sudo umount /dev/sdd*
                              sudo dd if=Fedora-Workstation-Live-x86_64-24-1.2.iso of=/dev/sdd bs=8M status=progress oflag=direct

                              3. This commandline pattern worked fine for Fedora and OpenSuse, since these isos were meant to work as Live CD/DVDs. So, I did not need to use unetbootin or anything like that for these. Approximately 3 MB per second to my USB drives, which were not very fast. 7-8 minutes to write 1.5 GB. 

                              creating an AppImage to distribute Linux binaries

Some notes on my AppImage creation process.


                              1. First tried creating manually, using
https://github.com/AppImage/docs.appimage.org/blob/master/source/packaging-guide/manual.rst and the AppRun-x86_64 binary from https://github.com/AppImage/AppImageKit/releases
2. Found and copied the libs it uses by doing the following to copy all shared libraries - https://www.commandlinefu.com/commands/view/10238/copy-all-shared-libraries-for-a-binary-to-directory
                                ldd file | grep "=> /" | awk '{print $3}' | xargs -I '{}' cp -v '{}' /destination
                              3. Need to use the environment variable $OWD (Original Working Directory) to get files from the OWD, like our ini file. This code snippet was a good template, https://stackoverflow.com/questions/40306012/c-using-environment-variables-for-paths
#include <cstdlib>   // for getenv
#include <iostream>

int main()
{
    // the SO example queries PATH; for the AppImage case, query "OWD" instead
    char* pPath = std::getenv("PATH");
    if (pPath)
        std::cout << "Path =" << pPath << std::endl;
    return 0;
}
                              4. Many libs copied with the manual method had to be excluded using the excludelist at
                                https://github.com/AppImage/pkg2appimage/blob/master/excludelist
                              5. Still did not work on newer distros. In this thread, probono recommended using trusty on TravisCI for building, and linuxdeployqt instead of manual creation of AppDir. Did that, and it worked. Far fewer libs are bundled by linuxdeployqt. Lists of the two bundles are available at this issue thread.
                              6. The download link from transfer.sh can be seen if the entire log is downloaded from TravisCI.
                              7. The travis yml file gives the outline of the build and bundling process.