This is my first blog
Week 14 2020 06 05
Acknowledgements
I would like to extend my thanks to the following, who have made this project possible:
Dr David Ashton, my workplace supervisor at PFR, for providing me with this opportunity, his time, and his support. Without Dr Ashton's help, this project would not have been possible.
To Jacky, for all his time and patience throughout my project, helping me set up and solve problems.
Dr Todd Cochrane, my NMIT supervisor, for helping me plan my work during the work placement and the write-up of the report.
Special thanks to all the staff at Nelson Marlborough Institute of Technology for their kind guidance and support.
Last but not least, to my friends and family for their unwavering support and encouragement.
Week 13 2020 05 29
Performance of the fish eye extraction module and future improvements
New images, provided by David, were used to evaluate the fish eye extraction performance, so in total two datasets were tested. The final version of the fish eye extraction module performs well: it achieved a 100% fish eye extraction success rate on the first trevally and snapper dataset, and a good success rate of 84% on the second dataset, with 78% (7 out of 9) on the snapper images and 90% (9 out of 10) on the trevally images.
Future improvements:
Further improvement of the fish eye extraction module could focus on extracting the correct fish eye contour for every image in the second dataset.
The result images for the second dataset show several areas where the module could be improved. First, the single trevally image with an incorrect contour suggests that the threshold value needs adjusting. Second, the snapper image with an incorrect contour in the middle of the fish suggests that an additional contour filter could be used to discard contours outside the fish head area, as sketched below.
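A minimal sketch of such a filter (an assumption only, not part of the current module; `head_region` is a hypothetical bounding box for where the head is expected to be):

```python
import cv2

def filter_contours_to_head_region(contours, head_region):
    """Keep only contours whose centroid falls inside the assumed head region.

    head_region is a hypothetical (x, y, w, h) box in image coordinates,
    e.g. the part of the photo where the fish head is expected to be.
    """
    x, y, w, h = head_region
    kept = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        if x <= cx <= x + w and y <= cy <= y + h:
            kept.append(c)
    return kept
```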
Week 12 2020 05 22
Reference management software: Zotero
Zotero is a very powerful reference management tool for organizing reference data such as research materials and bibliographic information.
In my report I have used many references from different sources, such as Google Scholar and web pages. Zotero is very user friendly: to add a reference to my library, I just need to click once on the 'Save to Zotero' icon in the browser's extension bar. It is very convenient.
The Zotero GUI provides a good management system for separating references into different folders according to personal preference, so I can find my previous work easily.
While adding references to the report, I found that some web page references had no reference data and that some titles were too long. I found out that in the Info tab of a reference we can change the title and date to fit the desired format.
So I am able to organize the different types of references well in my report!
Week 11 2020 05 15
I have been writing up the report recently and faced several questions, such as how to organize the report so that the reader understands what I did during the work placement.
After finishing the first few sections of the report, I showed it to my supervisor Todd, and he gave me advice on how to organize it in a better way:
- Use the correct writing style.
- Provide an introduction for each section to give the reader a brief idea of that section.
This advice helped me a lot in writing up the report and sped up the writing process.
However, I am still not so familiar with the referencing tool, Zotero, for referencing web pages. I need to study it in more detail next week to make the references clear and organized.
Week 10 2020 05 08
I have completed the outline of the project report and started to write up the report.
The project report includes 9 sections, as below:
Contents
1 Introduction
2 Background
3 Placement Plan
4 Placement Report
5 Placement Evaluation
6 Conclusion
7 References
8 Tables and Figures
9 Appendices
At the same time, my supervisor prepared another data set for me to test the performance of the module I wrote.
Week 9 2020 05 01
After finishing version 3 of the fish eye pipeline, it is time to write up documentation for the pipeline.
The documentation I added is an implementation section in the README of the repo. This helps users know how to set up and run the eye detection module.
The implementation section includes 4 parts, as below:
- a brief description of how to set up your local environment
- a description of the basic structure of the repo (e.g. pipeline to module level)
- a description of the basic data folder structure (e.g. input/workspace)
- example command line arguments (see the sketch after this list)
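As an illustration only (the script name and argument names below are hypothetical, not the repo's actual interface), the example command line arguments might be documented with an argparse setup like this:

```python
# Hypothetical command line interface for the eye detection module.
# The flags below are assumptions for illustration, not the repo's real ones.
import argparse

parser = argparse.ArgumentParser(description="Extract fish eye contours from fish images")
parser.add_argument("--input", required=True, help="folder of input fish images")
parser.add_argument("--workspace", required=True, help="folder for intermediate and output images")
parser.add_argument("--species", choices=["trevally", "snapper"], help="optional species hint")
args = parser.parse_args()
print(args.input, args.workspace, args.species)
```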
After completing the documentation, I committed and pushed it to the GitHub branch.
Week 7-8 2020 04 27
Pipeline version 3
In version 2, the pipeline was able to extract the contour for all trevally images; however, it only works on snapper images with a light colour.
To overcome this problem and get the eye contour, I tried two approaches:
- Adjust the contrast of the image and perform edge detection.
- Convert the image to HLS and perform binary thresholding.
Result of using the contrast correction method
First, comparing the image before and after contrast correction, to the human eye the contour is much easier to identify.

However, after testing Canny edge detection with different combinations of threshold values, the result was not satisfactory and the eye contour could not be extracted. The image below is the output of edge detection on the contrast-corrected image. The outline of the eye can be identified, but it also contains too much noise. Different techniques were applied to remove or reduce this noise, but the result is still not good.
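For reference, a minimal sketch of this first approach (the file name, contrast factors, and Canny thresholds are placeholders, not the exact values tested):

```python
import cv2

# Load the image in grayscale (the file name is illustrative only).
img = cv2.imread("trevally_01.jpg", cv2.IMREAD_GRAYSCALE)

# Simple linear contrast correction; the alpha/beta values are placeholders.
contrasted = cv2.convertScaleAbs(img, alpha=1.8, beta=-50)

# Canny edge detection; this threshold pair is one of many combinations tried.
edges = cv2.Canny(contrasted, 100, 200)
cv2.imwrite("edges.png", edges)
```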

Result of using the HLS filtering method
After a detailed study of the data images, I found that converting the image to the hue colour space and filtering on hue can effectively identify the fish eye area. The image below shows the different hues around the fish eye area. The fish eye and the reflection area have a high hue value, around 130 to 150. So this method can extract a rough outline of the fish eye, and after applying some polishing, a fairly good eye contour can be extracted.
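A minimal sketch of this hue-based filtering (assuming OpenCV 4; the file name and the lightness/saturation bounds are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("snapper_01.jpg")  # file name is illustrative only

# OpenCV HLS: channel 0 is hue (0-179 for 8-bit images), then lightness, then saturation.
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)

# Keep pixels in the high-hue band observed around the eye (roughly 130 to 150);
# the lightness and saturation bounds here are deliberately loose.
mask = cv2.inRange(hls, (130, 0, 0), (150, 255, 255))

# Rough "polishing": close small gaps in the mask before extracting contours.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```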

After testing these two methods, I selected the second one, the HLS filtering method, to handle the dark-coloured fish images and merged it into the version 2 pipeline. The basic idea is to run both the version 2 pipeline and the HLS filtering method, merge the resulting contours together, and then apply circularity and size filtering to select the contour used as the final output. The image below is from the HLS filtering method; the outline is not so smooth, but the eye contour can still be extracted.
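A minimal sketch of the merge-and-select step (assuming both methods return OpenCV contour lists; the circularity and size limits are placeholder values):

```python
import cv2
import math

def circularity(contour):
    # 4*pi*area / perimeter^2: 1.0 for a perfect circle, lower for irregular shapes.
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    return 0.0 if perimeter == 0 else 4 * math.pi * area / perimeter ** 2

def select_eye_contour(v2_contours, hls_contours, min_circ=0.7, min_area=500):
    # Merge the candidates from both methods, then keep reasonably round, large contours.
    candidates = [c for c in list(v2_contours) + list(hls_contours)
                  if circularity(c) >= min_circ and cv2.contourArea(c) >= min_area]
    # Take the largest remaining contour as the eye; this selection rule is an assumption.
    return max(candidates, key=cv2.contourArea) if candidates else None
```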

Using version 3 of the fish eye extractor pipeline, the fish eye contour was extracted for all 15 trevally images and 13 out of 15 snapper images. The only two failures were due to image quality: the fish eye area was blurred in the original images, possibly because the fish moved while the picture was being taken.
References:
https://docs.opencv.org/trunk/da/d22/tutorial_py_canny.html
https://docs.opencv.org/3.4/da/d97/tutorial_threshold_inRange.html
Week 6 2020 04 17
I tried to run my fish eye extraction pipeline on all the trevally photos and only one-third of them worked well. The reason could be the differences in background color and the different color patterns the fish have near the head area.
There are 15 trevally photos in my data set, which can be divided into three sub-sets:
- Small fish with a blue background.
- Small fish with a red background.
- Big fish with a blue background.
The eye extraction function can outline the contour of the dark area of the fish eye and calculate the center point of the eye. It can also draw the contour and the center point on the preview image.

However, at this stage, the function only works on the first sub-set of the trevally images (the first 5 images).
For the rest of the images, it fails to identify the eye, like this:

To identify the problem, I checked each step of my function and generated the intermediate image at each step.
The function works in the following steps (a rough sketch follows the list):
- Convert to grayscale
- Apply histogram correction
- Threshold the image to binary
- Extract contours
- Filter for round contours
- Filter to get the largest circle
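A minimal sketch of these steps, plus the preview drawing mentioned above (assuming OpenCV 4; apart from the threshold value of 110 discussed below, the file name and parameters are placeholders, and whether the threshold is inverted is an assumption):

```python
import cv2
import math

img = cv2.imread("trevally_01.jpg")                    # file name is illustrative only
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)           # convert to grayscale
equalized = cv2.equalizeHist(gray)                     # histogram correction
# Threshold at 110; THRESH_BINARY_INV assumes the dark eye should become foreground.
_, binary = cv2.threshold(equalized, 110, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def is_round(c, min_circ=0.7):
    # Circularity 4*pi*area / perimeter^2 is close to 1.0 for circles.
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    return perim > 0 and 4 * math.pi * area / perim ** 2 >= min_circ

round_contours = [c for c in contours if is_round(c)]  # filter round contours
eye = max(round_contours, key=cv2.contourArea) if round_contours else None  # largest one

if eye is not None:
    m = cv2.moments(eye)
    center = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))   # eye centre point
    preview = img.copy()
    cv2.drawContours(preview, [eye], -1, (0, 255, 0), 2)            # draw the eye contour
    cv2.circle(preview, center, 3, (0, 0, 255), -1)                 # draw the centre point
    cv2.imwrite("preview.png", preview)
```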
I found out that the binary image, using 110 as the threshold parameter, has two problems:
- It identified a larger dark area above the eye
- The eye is not circular

Comparing with the original image, I guess the reflection on the eye produces a lighter area that gets filtered out during the binary thresholding of the image.
So I am going to focus on the histogram correction step and try to make the whole background of the image a light color, to see how that works.
Week 4
After completing the first version of the fish eye pipeline, I got access to more data, such as trevally and snapper images with different sizes and background colors.
However, because of the COVID-19 lockdown at PFR, I am not able to work in the facility, and all of us need to prepare to work remotely from home.
My supervisor David suggested that I work at home and said he would figure out a way to make it work.
All my work, code and notes, is on the PFR computer, and as a student the policy does not allow me to take the company's computer home. So I need to set everything up again on my own PC to continue my project. This includes installing Python and PyCharm, setting up GitHub, and downloading the repo I am working on.
Later this week David will send me a subset of the fish photos, which includes fish at different stages (ages) of two different species, snapper and trevally.
Finally, I got everything set up as before, so now I can keep working on my project again. With great help from David and Jacky, I have a subset of the fish photos and got the program from GitHub.
Thank you, David and Jacky, for all your help!!