Greetings, my fine group of unfortunate individuals whose task it is to continue what we started. In this zip file you will find a handful of scripts for processing drone videos with OpenPIV and handling the data produced. To run these scripts you will need to install OpenPIV, OpenCV (opencv-python), tkinter (used by OpenPIV, so don't worry about the UI if you are running headless), and NumPy for Python 3 in whatever IDE/environment you like. If you are on Linux, run installReq.sh to install all required libraries. Personally, we ran everything from a bash terminal on a Linux server, and we highly recommend doing so if you can find one to use. If not, get access to HPC and figure out how to use that.

The first and most important script is downsample.py. It takes your video file and downsamples it to create a much smaller file. While you don't do this for your final data, it makes processing much faster and allows for rapid prototyping of scripts. Once you have a script that works, run it on the non-downsampled footage. You can run this script in two ways. The first (and best) is to call it from the command line with two arguments: first the raw video, then the path of the video you want to export. The export must be .avi due to the XVID encoder; if you want .mp4, that's up to you to figure out. There are also two optional parameters, -f and -s, which are integer downsample constants: -f is the frame divisor and -s is the scale divisor. For instance, say you have a 4K video at 60 fps called video.mp4. The command `downsample.py video.mp4 video-d.avi -f 2 -s 2` will create a 1080p 30 fps video, video-d.avi. If -f were 3 it would have been 20 fps, and if -s were 1 it would have stayed 4K.
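If you're wondering what the divisors actually do, it's simple: keep every f-th frame, and every s-th pixel of each kept frame. Here is a minimal sketch of that core logic using plain NumPy arrays in place of decoded frames. The function and variable names are ours for illustration, not the actual ones in downsample.py, and the real script reads frames with cv2.VideoCapture and writes them through a cv2.VideoWriter with the XVID fourcc:

```python
import numpy as np

def downsample_frames(frames, f=2, s=2):
    """Keep every f-th frame and shrink each kept frame by a factor of s.

    frames: iterable of HxWx3 uint8 arrays (decoded video frames).
    The output frame rate becomes input_fps / f, and each dimension
    becomes 1/s of the original.
    """
    out = []
    for i, frame in enumerate(frames):
        if i % f != 0:               # frame divisor: drop all but every f-th frame
            continue
        out.append(frame[::s, ::s])  # scale divisor: naive pixel subsampling
    return out

# A fake 4-frame "video" of 8x8 frames, halved in both rate and size:
video = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(4)]
small = downsample_frames(video, f=2, s=2)
# -> 2 frames of shape (4, 4, 3)
```

Note that raw slicing like this aliases badly; cv2.resize with INTER_AREA would look nicer, but for prototyping-speed purposes the crude version is fine.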
The second way is to call the function directly from another Python script: import the file and call vector_visualizer(vidIn, vidOut, frameSampleRate, frameSampleSize), with the same arguments as the command line. This script took about 10 minutes to run on our server using one core.

This next script is a beast; you only want to run it once on your raw footage. Figure out any preprocessing steps you want using downsampled data first, then run this on the preprocessed video. The script is video_piv.py and there are three ways to run it. First, from the command line: it takes up to four arguments, in order: the video to process, the directory to export the grayscale frames to, the directory to export the PIV data to, and the directory to export the masked PIV data to. You only need to give it the first argument (the video); if you do, the other three default to ProcessedFrames_Out, OpenPIV_Out, and Mask_out respectively in the cwd. You can change these on lines 156-158. The second way is to run it with no arguments, in which case you will be prompted for the video and the directories; everything except the video can be skipped by entering a blank line, and will default as above if so. If you want to run with all defaults, use the argument -d instead of the others; the default video is set on line 162. The last way is from Python: import the file and call video_piv(videoInputPath, frameOutputPath, pivOutputPath, maskRemoveOutputPath); the last three arguments are optional and default to values defined on line 136. You can also run the frame extraction and the PIV processing separately by calling the functions individually.
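For intuition about what the PIV step is actually doing to your frames: each pair of consecutive frames is cut into interrogation windows, and the displacement of the texture in each window from one frame to the next is found by cross-correlation; displacement divided by the frame interval gives velocity in pixels per second. Below is a toy phase-correlation version of that idea. This is not OpenPIV's actual algorithm (OpenPIV adds search areas, subpixel peak fitting, signal-to-noise masking, and so on), just a sketch of the principle:

```python
import numpy as np

def window_displacement(later, earlier):
    """Integer (dy, dx) shift of `later` relative to `earlier`,
    estimated via phase correlation (normalized FFT cross-correlation)."""
    cross = np.fft.fft2(later) * np.conj(np.fft.fft2(earlier))
    cross /= np.abs(cross) + 1e-12         # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point are negative shifts wrapped around
    if dy > later.shape[0] // 2:
        dy -= later.shape[0]
    if dx > later.shape[1] // 2:
        dx -= later.shape[1]
    return dy, dx

# Shift a random 32x32 "interrogation window" by (3, 5) and recover it:
rng = np.random.default_rng(0)
win_a = rng.random((32, 32))
win_b = np.roll(win_a, shift=(3, 5), axis=(0, 1))
dy, dx = window_displacement(win_b, win_a)
# -> (3, 5); velocity in px/s would be (dx / dt, dy / dt) with dt = 1 / fps
```

The 32x32 window size here matches the 32-pixel grid spacing the real output uses, which is also why the data is so much coarser than the video resolution.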
Those functions are extractFrames(videoIn, frameOutputPath) and processFrames(frameRate, frameCount, frameOutputPath, pivOutputPath, maskRemoveOutputPath). extractFrames takes about 20 minutes for us and is single threaded. processFrames, on the other hand, would take more than a week if run single threaded; there is a constant on line 9 for the number of threads to use. Set this to the number of CPUs you want used for PIV processing. We had a 48-core server, which brought our processing time down to 4 hours for 4K 30 fps footage and 8 hours for 4K 60 fps footage. The time taken is roughly frames * pixels / cores / 1,000,000. We did most of our processing on downsampled 1080p 15 fps footage, which still takes 17 minutes to run. All cores will sit at 100% CPU while running. Additionally, this process can generate upwards of 100 GB of data (for 5 minutes at 4K 60 fps, 150 GB are created: 100 GB of exported frames, 30 GB of raw data, and 20 GB of masked data).

###WARNING: VIDEO_PIV CAN TAKE MULTIPLE DAYS TO RUN AND GENERATE UPWARDS OF 100GB OF DATA###

While running, this script tends to throw divide-by-zero-in-log warnings. We're not sure why, but they don't seem to break anything. Note that this script outputs in pixels per second.

From here there are three scripts you can run in any order: video_bathymetry.py, reverse_bathymetry.py, and vector_visualizer.py. vector_visualizer.py overlays the velocity vectors on each frame of the video and makes a new video. This is useful for testing filters or just verifying that the PIV data looks correct. It can be run in three ways: command line, Python, and prompt (you're used to this by now). Both command line and Python take the input video, the path to the PIV files, and the file to export (.avi); prompt mode will just ask you for them instead. Additionally, there is a -n option (numbers parameter) that exports a field of numbers instead of arrows. These can be difficult to read, but they provide a good way to manually read off a velocity at a point.
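The processing-time rule of thumb above (frames * pixels / cores / 1,000,000) is worth evaluating before you commit to a multi-day run. Here is a quick calculator for it; fair warning, the original rule never stated units, and read as seconds it comes out optimistic compared to our actual wall-clock times, so treat the result as an order-of-magnitude lower bound:

```python
def piv_runtime_estimate(duration_s, fps, width, height, cores):
    """Rule-of-thumb processFrames cost: frames * pixels / cores / 1e6.
    Read the result as rough seconds, and as an optimistic lower bound."""
    frames = duration_s * fps
    pixels = width * height
    return frames * pixels / cores / 1_000_000

# 5 minutes of 4K (3840x2160) at 30 fps on a 48-core server:
est = piv_runtime_estimate(5 * 60, 30, 3840, 2160, 48)
# -> 1555.2, i.e. roughly half an hour at best
```

Run this with your footage length and core count before deciding whether to downsample first.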
Note that this script does not change units, so the numbers are most likely in pixels per second, not meters per second.

reverse_bathymetry.py averages the velocity at each point and exports a TSV point cloud in the format x y d, where d is depth. It also exports the average velocity at each point. Three ways to execute, blah, blah, blah. It takes the path to the PIV data, a path to export the averages file to, a path for the depth file, and a pixels-per-meter (ppm) conversion factor. Everything except the ppm is optional and defaults to the same things as the other scripts. The ppm is very important because gravity is in meters per second squared, so without it you don't even have a guess at depth in real units. That all being said, this script only makes nice-looking point clouds: the bathymetry equation is only accurate on the crests of waves, and the averaging picks up a lot of data from the troughs, so the depths can be off by up to 8x. However, they are consistently off by this same multiplier across the whole cloud, so if you know where the ground should be, you can find the constant and correct for it.

video_bathymetry.py does the same thing as reverse_bathymetry.py, but it recalculates the bathymetry each frame based on the running average and makes a video of it. It is quite beautiful to watch, and it is how we figured out that about 1 minute 30 seconds of footage is required for good sandbar detection. However, it needs manual color palette tweaking each run and does not export a data file. Same three ways to run; it takes the PIV data and a location to export the video to. Do note that its export size and scale are hard coded to our 4K 30 fps, 50-meter video right now; just change the last line if you want to use a different video. Also, the video might look poor quality, but that is because the PIV data is sampled on a 32-pixel-spaced grid, so there are only about 100x50 to 200x100 data points anyway. This is much, much less than HD, so the video is already max resolution for the data given.
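A note on what we believe the depth math is doing, so you can sanity-check your ppm: assuming the bathymetry equation is the standard shallow-water wave celerity relation c = sqrt(g * d) (which is consistent with the gravity dependence and the crest-only accuracy mentioned above), the per-point inversion looks like the sketch below. The function name is ours, not the script's:

```python
G = 9.81  # gravitational acceleration, m/s^2

def depth_from_velocity(v_px_per_s, px_per_m):
    """Depth guess (meters) from an averaged wave speed, by inverting
    the shallow-water celerity c = sqrt(g * d), i.e. d = c^2 / g.

    v_px_per_s: averaged PIV wave speed at a grid point (pixels/second)
    px_per_m:   the all-important pixels-per-meter conversion factor
    """
    c = v_px_per_s / px_per_m  # convert to meters per second
    return c * c / G

# 100 px/s at 50 px/m is 2 m/s, suggesting ~0.41 m of water
d = depth_from_velocity(100, 50)
```

Remember that the up-to-8x systematic factor from trough contamination still applies to whatever this returns; it only gives you relative shape until you anchor it to a known ground point.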
Other than that, there is one script, depthFrame.py, that takes depth data as a dictionary of depths keyed by xy coordinates, the point spacing (probably 32), and the number of points in a row and column. It returns a NumPy array of BGR data that OpenCV can save as a PNG; this image is color-coded by depth. This script is designed to be used from video_bathymetry.py.

There is also lense_correction.py, which will theoretically take a matrix and a video and fix the lens distortion, but for us it just caused a black hole, so we haven't put much work into it. If you want, you can try to get it working, but it is currently held together with digital duct tape.

Lastly, the remaining three scripts, average_velocities.py, display_vectors.py, and histograms.py, were inherited from the previous group (same with the base of video_piv.py, but we changed that one so much you wouldn't even recognize it). We don't really know what average_velocities.py does; it seems to average all the velocities visible in a frame? Maybe? histograms.py does some sort of statistical analysis on the file, dunno why. And display_vectors.py displays the velocity vectors on a white background... for each frame, one after the other... all 18,000 of them, each in its own window. Basically it's a worse version of vector_visualizer.py, and in fact it inspired us to create vector_visualizer.py. And that is all the scripts.

Next, some advice. For ground truth data we used a series of poles planted in the surf zone, marked with black and white bands exactly 10 cm wide and watched by a shore cam. This worked well; we recommend >1080p for the shore camera, and put it on the pier, not the shore. If you use these poles, make sure that 1/3 of each pole is buried in the sand and that the bottom-most band is level with the sand. The sand around the base will erode as time passes, so we recommend surveying the poles immediately and checking them between drone flights.
You will be tempted not to bury 1/3 of the length for short poles in shallow water; do it anyway, or they will wash away. Shoot drone footage at 4K 30 fps or lower. 60 fps just seems to make noisy footage; 15 fps is quite nice to work with and takes about an hour to PIV process. 1080p might be a bit low resolution, and it can be hard to see details in.

Spend a day with the drone in the lab calibrating the camera until you can export a correctly corrected video. We didn't do this, our distortion data was off, and we couldn't fix it later. If you want to do this, do it before all other preprocessing.

If you want to stabilize the footage, we recommend the 2-point video tracking stabilizer in HitFilm 4 or Adobe After Effects (AE is better, but HitFilm is free). You want to use the two furthest-apart points you can; we tried with two shore targets, but the end of the pier shook around like crazy. Place 3 or 4 ground control targets: one fixed (0,0) target in a corner, and two other variable targets in the diagonal corners of the footage. Survey in these targets. Use the corner targets for video stabilization and for determining scale. At the start and end of collecting video, survey a point on the exact edge of the water, a water zero as it were; this is useful for correcting the GPS data later.

Remember to call Jason if you have any questions.

Sincerely, those one dudes who left you this mess to sort out