Update of all included codes

Merged Ryan Rautenbach requested to merge Ryan_updated_codes into master
18 files changed, +144 −409
### This code is used to evaluate the trajectory of the Lagrangian particles; it uses the CSRT tracker.
### Certain wait keys that can be found throughout the code are used for locating problems in the analysis.
### In the first section the code loads all necessary packages, followed by the definition of the directories.
# -*- coding: utf-8 -*-
"""
Created on Tue Aug 9 13:50:45 2022
Sebi would like more documentation here...
@author: ryanr
"""
@@ -23,14 +23,16 @@ duration_stop = 200
freq = 800 #Hz
manual = False # change the variable here to True for manual detection of the ROI
### Here, input the directory where the raw images (ONLY THE IMAGES) are located and the directory where the results are to be saved, respectively.
### Images should be in TIFF format; if the format is different, make sure that all images in the directory share the same format so that their order is preserved. Never save into the raw data directory, as it must only contain the raw, untracked images!
trackerTypes = ['BOOSTING', 'MIL', 'KCF','TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
path_to_images = r"E:/0_Master_thesis_RMR/04_Exper_RAW/40mm/EQ-40/EQ-40_6.31kg/EQ-40_6.31kg_free/trial_3/RAW_Tiff/"
path_to_output = r"E:/0_Master_thesis_RMR/04_Exper_RAW/40mm/EQ-40/EQ-40_6.31kg/EQ-40_6.31kg_free/trial_3/output/velocity and trajectory/"
### selection of tracker type here
def createTrackerByName(trackerType):
# Create a tracker based on tracker name
@@ -59,13 +61,13 @@ def createTrackerByName(trackerType):
return tracker
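The body of createTrackerByName is elided in this diff; a minimal sketch of such a factory is shown below. The mapping and the version fallback are assumptions: OpenCV tracker constructor locations vary between versions (several trackers moved under cv2.legacy in opencv-contrib 4.5+), so the actual code may resolve them differently.

```python
# Hypothetical sketch of a tracker factory, not the exact code from this MR.
TRACKER_FACTORY = {
    'MIL': 'TrackerMIL_create',
    'KCF': 'TrackerKCF_create',
    'CSRT': 'TrackerCSRT_create',
}

def create_tracker_by_name(tracker_type):
    if tracker_type not in TRACKER_FACTORY:
        raise ValueError(f'unknown tracker type: {tracker_type}')
    import cv2  # imported lazily so the name check works without OpenCV installed
    ctor_name = TRACKER_FACTORY[tracker_type]
    # newer opencv-contrib builds move several trackers under cv2.legacy
    ctor = getattr(cv2, ctor_name, None) or getattr(getattr(cv2, 'legacy', cv2), ctor_name)
    return ctor()
```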
# Set video to load; instead of images, a video could be loaded here
#videoPath = "videos/run.mp4"
# Create a video capture object to read videos; this essentially sets the start for the tracking over the images, based on the order the images are sorted in within your directory
list_file=os.listdir(path_to_images)
frame = cv2.imread(path_to_images+list_file[0])
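Since the tracking order follows the order of the files in the directory, note that os.listdir returns names in arbitrary order and plain sorting is lexicographic ('img10' before 'img2'). A small sketch of a numeric sort key, using hypothetical file names, that keeps frames in recording order:

```python
import re

def numeric_sort_key(filename):
    # Extract the first run of digits so 'img2.tiff' sorts before 'img10.tiff'.
    match = re.search(r'\d+', filename)
    return int(match.group()) if match else float('inf')

files = ['img10.tiff', 'img2.tiff', 'img1.tiff']  # hypothetical names
ordered = sorted(files, key=numeric_sort_key)
# ordered == ['img1.tiff', 'img2.tiff', 'img10.tiff']
```

Zero-padded names (img0001.tiff, ...) avoid the problem entirely and sort correctly with a plain sorted().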
#frame = cv2.resize(frame, (960, 540)) # if your frame is out of shape or you want to manually locate the ROI, adjust the frame here
frame_height, frame_width, _ = frame.shape # shape is (rows, cols, channels), i.e. (height, width, channels)
@@ -75,6 +77,10 @@ colors = []
# OpenCV's selectROI function doesn't work for selecting multiple objects in Python,
# so we will call this function in a loop till we are done selecting all objects.
# Here the automated detection is introduced, based on the pixel gradient, which has to be adjusted to the images and the size of the ROI to improve the ability of the algorithm.
# Set the gradient size in the contour function based on the number of pixels for the respective surface.
# Adjustments are required at times depending on the images, as gradients at the edges of experimental containers and the surroundings of an image can hamper the detection when multiple candidate ROIs are found.
# Adjust this via the frame[] slice; tracking can still be conducted outside of the slice, but restricting detection to one specific area gives the best ability to detect the LPs evenly and in a reproducible manner.
if manual ==False:
Beep(freq, duration_start)
im=cv2.cvtColor(frame[60:1600,60:1600],cv2.COLOR_BGR2GRAY)
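The automated detection described above can be pictured as thresholding the grayscale crop and taking the bounding box of the bright region. A minimal numpy-only sketch of that idea (the actual code uses OpenCV contour detection, and the threshold value here is an assumption):

```python
import numpy as np

def bright_bounding_box(gray, threshold=128):
    # Return (x, y, w, h) of the smallest box containing all pixels above threshold,
    # or None if no pixel exceeds it.
    ys, xs = np.nonzero(gray > threshold)
    if ys.size == 0:
        return None
    x, y = int(xs.min()), int(ys.min())
    return (x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1)

frame_gray = np.zeros((10, 10), dtype=np.uint8)
frame_gray[3:6, 4:8] = 255  # a synthetic bright particle
# bright_bounding_box(frame_gray) -> (4, 3, 4, 3)
```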
@@ -123,8 +129,10 @@ y_p =[]
#BKG=cv2.imread(path_to_background+'background.png')
o=0
cn=0
# variable to also save and create a video of the images with tracking; change the frame rate (here 10 fps) to set the desired playback speed of the video
out = cv2.VideoWriter(path_to_output+'outpy.avi',cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width,frame_height))
# this section is where the difference between the ROI in one image is determined compared to the next image
for j in list_file:
if j.endswith(('bmp', 'tiff')):
@@ -173,14 +181,14 @@ for j in list_file:
#just uncommenting the first one gives end view, uncommenting all gives frame by frame
#o+=1
# quit on ESC button while the tracking window is shown and open
if cv2.waitKey(1) & 0xFF == 27: # Esc pressed
break
trajectory = np.asarray(trajectory)
### Important: normalise the respective ROI based on the pixel-to-cm ratio, and adjust the trajectory calculations here based on the recording frequency
velocity = np.gradient(trajectory, 1/20, axis=0)
intig = np.trapz(velocity, x=None, dx=20, axis=0)
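The velocity step above differentiates the trajectory with np.gradient using the frame spacing (1/20 s); integrating the velocity back with np.trapz over the same spacing approximately recovers the total displacement. A small self-contained sketch with a synthetic constant-velocity trajectory (the values are illustrative, not from the experiment):

```python
import numpy as np

dt = 1 / 20  # frame spacing in seconds (20 fps)
t = np.arange(0, 1, dt)  # 20 time stamps: 0.00 .. 0.95 s
# synthetic 2-D trajectory moving at a constant (2, -1) px/s
trajectory = np.column_stack([2.0 * t, -1.0 * t])

# per-frame velocity; exact for linear data, every row is [2, -1]
velocity = np.gradient(trajectory, dt, axis=0)
# trapezoidal re-integration over the 0.95 s span -> [1.9, -0.95] px
displacement = np.trapz(velocity, dx=dt, axis=0)
```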
@@ -188,13 +196,14 @@ cv2.destroyAllWindows()
velocity = np.divide(velocity, 34.65)
trajectory = np.divide(trajectory, 34.65)
### just a help to know when the code is done for long tracking runs
Beep(freq, duration_stop)
Beep(freq, duration_stop)
Beep(freq, duration_stop)
### Here the results are saved; saves are generally best done as numpy files.
### All relevant trajectory data is separated into the positional and velocity results along a single axis; make sure to adjust the names as needed. CSV saves are not recommended.
np.save(path_to_output+'particle_trajectory', trajectory)
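The saved .npy files can be reloaded later without any precision or shape loss, which is why numpy saves are preferred over CSV here. A quick round-trip sketch using a temporary directory instead of the output path above:

```python
import os
import tempfile
import numpy as np

trajectory = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]])  # illustrative data

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'particle_trajectory.npy')
    np.save(path, trajectory)      # dtype and shape are stored in the file header
    loaded = np.load(path)         # round-trips bit-identically
```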