Project Diary - Oterem/moleAgnose GitHub Wiki

Welcome to the moleAgnose wiki!

This section is our personal diary. In it we document our meetings (with and without Assaf), code improvements (based on commits), and so on.


The following entries were written retrospectively, as part of the College's requirement to keep a personal diary for the project.

Meeting

Date: 12.9.17
Time: 10:00
Location: Assaf's office at the College.
Attendees: Assaf Spanier, Shai Lehmann, Omri Terem.
In this meeting we discussed the project with Assaf. At that time we did not yet know which path to choose - developing an application or doing research about melanoma. Assaf pointed out the three main tasks in melanoma detection - lesion segmentation, lesion dermoscopic feature extraction, and lesion classification. Each task can be a research project on its own, so we needed to think carefully about which direction we wanted. We explained to Assaf that we have no programming experience with Android or computer vision (using OpenCV), but that we are willing and highly motivated, and that with the right guidance we will be able to succeed. We had concerns regarding a research project, so in the end we decided to try to develop an Android application. Assaf asked us to write some code to see if we can handle Android and OpenCV, and after "some" prototype to email him back and report our progress.


Date: 18.9.17
A new repository was created. For the initial commit we wrote a simple Android application that takes pictures or loads them from the gallery and presents them to the user. We built a basic UI. We started working on mole segmentation by applying a static threshold and finding contours.


Date: 20.9.17
Separated the analysis process into another thread using the AsyncTask class. This prevents locking the main GUI while the image is being processed.


Date: 24.9.17
The static threshold was not good enough; we needed a dynamic threshold based on histogram analysis, so we plotted a histogram on demand. In addition, we tried to filter the image to reduce noise such as hair and ruler markings inside the picture.
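As a rough sketch of what such a histogram-based dynamic threshold can look like, here is a numpy-only Otsu-style implementation (the function name and the toy image are illustrative, not our actual app code, which ran on OpenCV for Android):

```python
import numpy as np

def histogram_threshold(gray):
    """Pick a threshold by maximizing between-class variance (Otsu's method).
    `gray` is a 2-D uint8 array; returns the chosen threshold value."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mu_total = np.dot(np.arange(256), hist)  # total intensity sum
    best_t, best_var = 0, 0.0
    cum_w = 0.0   # pixel count of the background class so far
    cum_mu = 0.0  # intensity sum of the background class so far
    for t in range(256):
        cum_w += hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        cum_mu += t * hist[t]
        mu_b = cum_mu / cum_w                         # background mean
        mu_f = (mu_total - cum_mu) / (total - cum_w)  # foreground mean
        var_between = cum_w * (total - cum_w) * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A dark "mole" on a bright "skin" background separates cleanly:
img = np.full((50, 50), 200, dtype=np.uint8)
img[20:30, 20:30] = 40
t = histogram_threshold(img)
mask = img <= t  # binary segmentation mask
```

The advantage over our first static threshold is that the cut point adapts to each picture's lighting instead of being hard-coded.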


Date: 25.9.17
Fixed some memory leaks.


Date: 27.9.17
A new and cool UI was added. We needed an adaptive blur to reduce "noisy" contours, so we deleted contours with small area (below 300 pixels) and bounded the number of iterations. In addition, we noticed that in the process of taking pictures we capture some "noisy" natural elements (all the other objects captured in addition to the user's skin) that caused bad segmentations. At first we tried to develop a human skin detector, but it was too hard because of the huge color range of human skin. We then decided to add an image crop and zoom mechanism to eliminate unwanted objects.


Meeting

Date: 4.10.17
Time: 9:00
Location: Assaf's office at the Hebrew University.
Attendees: Assaf Spanier, Shai Lehmann, Omri Terem.
In this meeting we presented our progress to Assaf. Here is the summary of that meeting.


Date: 8.10.17
After the meeting with Assaf, we needed to quantify our results. We used the Dice and Jaccard coefficients to score our code. To speed things up, we wrote a Python script that loads images and analyzes them using our code; afterwards we analyzed the resulting images with Matlab to produce the Dice and Jaccard scores. The final report can be found here.
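For reference, the two overlap scores can be computed from a predicted mask and a ground-truth mask like this (a minimal numpy sketch; the masks below are synthetic examples, not ISIC images):

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice and Jaccard overlap scores for two boolean segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard

# Two partially overlapping 6x6 squares on a 10x10 grid:
pred = np.zeros((10, 10), bool);  pred[2:8, 2:8] = True
truth = np.zeros((10, 10), bool); truth[4:10, 4:10] = True
d, j = dice_jaccard(pred, truth)
```

Both scores run from 0 (no overlap) to 1 (perfect segmentation), and Dice can always be recovered from Jaccard as 2J/(1+J).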


Date: 13.10.17
We encountered an unexpected problem in presenting large images (30 MB, for example) in high resolution. After "digging" in Stack Overflow and Android developer forums, we tried the "Glide" image library, and now we can present these images.


Date: 16.10.17
After the meeting with Assaf, we needed an alternative way to segment the mole relative to the user's skin color. Assaf suggested that we try to implement a region-growing algorithm. We learned about a few algorithms that can provide the desired region-growing result, such as watershed, flood fill, and grab cut. To start, we needed to sample skin and mole coordinates and extract their RGB values; then use these RGB values to find the color differences and segment accordingly.
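The core idea can be sketched as a simple BFS region growing from a sampled mole coordinate (a numpy-only toy version with an assumed color-distance tolerance; our real implementation used OpenCV primitives):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` by breadth-first search, accepting 4-connected
    neighbours whose RGB distance to the seed colour is below `tol`.
    Returns a boolean segmentation mask."""
    h, w = img.shape[:2]
    seed_color = img[seed].astype(float)
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.linalg.norm(img[ny, nx].astype(float) - seed_color) < tol:
                    mask[ny, nx] = True
                    q.append((ny, nx))
    return mask

# Dark square "mole" on light "skin"; seed sampled inside the mole:
img = np.full((20, 20, 3), 200, np.uint8)
img[5:15, 5:15] = (60, 40, 30)
mask = region_grow(img, seed=(10, 10), tol=50.0)
```

The segmentation adapts to the user's skin color automatically, because only the difference between the sampled mole color and its surroundings matters, not any fixed threshold.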


Date: 19.10.17
After implementing region growing, we got better results in segmenting the mole. The full report can be found here.


End of retrospective reports.


Date: 30.10.17
We need to extract mole features such as asymmetry, border, color, etc. At this point our segmentation produces a binary image, so we used a bitwise_and operation to get the segmented mole in full color for analysis.
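The masking step is equivalent to the following numpy sketch (the random image stands in for a real photo; in the app this is `cv2.bitwise_and(img, img, mask=mask)`):

```python
import numpy as np

# A color image and a binary segmentation mask (255 inside the mole, 0 outside):
img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 255

# Keep only the pixels where the mask is non-zero; zero out everything else.
segmented = np.where(mask[:, :, None] != 0, img, 0).astype(np.uint8)
```

The result keeps the mole's original colors inside the segmented region, which is exactly what the color-based features need.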


Meeting

Date: 1.11.17
Time: 12:30
Location: Assaf's office at the College.
Attendees: Assaf Spanier, Shai Lehmann, Omri Terem.
In this meeting we reported our progress and decided about the next phase. The full report can be found here.


Date: 1.11.17
After the meeting, we began to learn about the Chan-Vese algorithm through YouTube videos and other websites and slideshows on the internet. Unfortunately, we did not understand how this algorithm works or how to implement it (apparently it is related to energy functionals and derivatives). We updated Assaf and a Skype meeting was scheduled for 2.11.17 at 21:00.


Date: 2.11.17
After the Skype conversation, Assaf will keep investigating the Chan-Vese algorithm. In the meantime, we will try the superpixel method combined with a bilateral blur.


Date: 3.11.17
We investigated the superpixel algorithm in OpenCV. The good news is that this algorithm exists in OpenCV. The bad news is that it is not part of the built-in libraries (we need the ximgproc module), and we need to recompile OpenCV with its extra modules as described here. So far, no luck with this task.


Date: 5.11.17
Due to difficulties merging the external ximgproc module, we decided that for the sake of the research part of our project, we will try to use Python and segment the images based on what Assaf suggested - SLIC superpixels. So far we have managed to create a superpixel segmentation. Our next goal is to mask all the other unwanted pixels and produce a binary image for testing against the ISIC database. An example of what we achieved today is this superpixel:


Date: 14.11.17
So finally we have managed to segment using the superpixels. The full report can be found here.


Date: 19.11.17
After the short meeting in Assaf's office on Wednesday, we are focusing on extracting mole features. Our goal now is to get numeric values for the mole (roundness, color, asymmetry).


Date: 21.11.17
We have succeeded in quantifying the "roundness" of a mole. First we find the minimal enclosing circle of the mole; then we compare that circle to the mole using the built-in function matchShapes(), providing these two shapes. The lower the number, the more similar the shapes. In other words, dangerous moles tend to be "less round", so we want to measure how "round" the suspicious mole is as part of our diagnosis.
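A related way to quantify roundness, shown here as a self-contained numpy sketch (this is the classic isoperimetric circularity, not the Hu-moment comparison that matchShapes() performs, and the test polygons are synthetic):

```python
import numpy as np

def circularity(points):
    """Isoperimetric roundness 4*pi*A/P^2 for a closed polygon given as an
    (N, 2) array of contour points: 1.0 for a circle, smaller for irregular
    shapes."""
    x, y = points[:, 0], points[:, 1]
    # Shoelace formula for the polygon area:
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter as the sum of consecutive edge lengths:
    per = np.sqrt(((points - np.roll(points, -1, axis=0)) ** 2).sum(axis=1)).sum()
    return 4.0 * np.pi * area / per ** 2

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # near-perfect circle
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
```

Like the matchShapes() score, this gives a single number per contour, so a threshold can flag moles that are "not round enough".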



Meeting

Date: 4.12.17
Time: 14:30
Location: Dr. Merims's office, Sharet institute of oncology, Hadassah Ein Karem hospital.
Attendees: Dr. Sharon Merims, Shai Lehmann, Omri Terem.
In this meeting we wanted to get a medical professional's opinion and advice. The full report can be found here.


Date: 17.12.17
We are now aiming for the asymmetry calculation. For that, we need to split the blob into two blobs along its major and minor axes. At the moment we have succeeded in rotating the blob using a rotation matrix. An example of this rotation:


Date: 18.12.17
After the rotation we are facing a problem: the blob (which is a set of points) is not remapped according to the rotation. We see the rotated image, but the points themselves were not remapped to their new positions. We need to remap the points according to the rotation in order to continue with the asymmetry calculations.
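The missing remapping step amounts to applying the same rotation matrix to the contour points themselves, for example (a minimal numpy sketch; the function name and sample points are illustrative):

```python
import numpy as np

def rotate_points(points, center, angle_deg):
    """Remap an (N, 2) array of contour points by rotating them `angle_deg`
    degrees around `center` - the same transform the rotation matrix applies
    to the image pixels."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    # Translate to the origin, rotate, translate back:
    return (points - center) @ R.T + center

pts = np.array([[2.0, 0.0], [0.0, 1.0]])
rotated = rotate_points(pts, center=np.array([0.0, 0.0]), angle_deg=90)
```

Once the point set is remapped like this, splitting the blob along the (now axis-aligned) major and minor axes becomes a simple coordinate comparison.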


Meeting

Date: 20.12.17
Time: 11:45
Location: Assaf's office at the College.
Attendees: Dr. Assaf Spanier, Shai Lehmann, Omri Terem.
The full report can be found here.


Date: 20.12.17
After the meeting, we are now starting to read and learn more about PCA - principal component analysis.


Date: 21.12.17
We are having some issues with PCA in OpenCV. So far, no success.


Date: 22.12.17
As part of using PCA, the correlation matrix should be an output, but we are not certain whether this output is correct. We generated that matrix ourselves (computing covariance and correlation), and again got unexpected results.


Date: 24.12.17
We are trying a different approach to get the mole orientation.


Date: 31.12.17
Finally, a partial success: we managed to find the axes of the mole and draw them. The next mission is to divide the mole into two parts along the X-axis and the Y-axis.
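The axis-finding step boils down to PCA on the blob's foreground pixel coordinates, which can be sketched with numpy alone (the function name and the rectangular toy blob are illustrative, not our production code):

```python
import numpy as np

def principal_axes(mask):
    """Return the centroid and the unit major/minor axis directions of a
    binary blob, via PCA on the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)      # 2x2 covariance of the coordinates
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    major = vecs[:, 1]                  # direction of largest variance
    minor = vecs[:, 0]
    return center, major, minor

# Elongated horizontal blob: the major axis should point along x.
mask = np.zeros((20, 40), bool)
mask[8:12, 5:35] = True
center, major, minor = principal_axes(mask)
```

The eigenvector with the largest eigenvalue gives the major axis; rotating the blob so this vector lies on the X-axis is what makes the later left/right and top/bottom split possible.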


Meeting

Date: 10.1.18
Time: 11:00
Location: Assaf's office at the College.
Attendees: Dr. Assaf Spanier, Shai Lehmann, Omri Terem.
The full report can be found here.


Date: 12.1.18
We fixed a bug in converting a contour into a list of points - we now verify the index of the biggest contour before converting it to a list of points. In addition, we added a launcher icon. The next task is to separate segmentation and feature extraction into two separate threads to improve performance.


Date: 14.1.18
We separated the segmentation and feature-extraction tasks into two threads. We used Android's AsyncTask class and implemented the other functions according to our needs.


Date: 1.2.18
We concluded that we need to extract features using a web service, because we need Python code for that, and it is much easier than using OpenCV on Android. Assaf suggested the pythonanywhere and web2py services. We will keep checking these services.


Date: 10.2.18
We will try using AWS for feature extraction. We can use Amazon S3 (Simple Storage Service), upload an image (in our case, a binary image), and invoke Python code that takes that image as input. The results will be returned as JSON to the Android device.
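The Lambda side of this flow can be sketched as a handler that reads the S3 upload event and returns its result as JSON (the bucket name, key, and `asymmetry` field are hypothetical placeholders; the actual image download and analysis are elided):

```python
import json

def handler(event, context):
    """Sketch of a Lambda handler triggered by an S3 upload event: read the
    bucket/key of the uploaded image and return the analysis result as JSON."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # ... here the real function would fetch the image from S3 and run the
    # feature-extraction code on it ...
    result = {"bucket": bucket, "image": key, "asymmetry": None}
    return {"statusCode": 200, "body": json.dumps(result)}

# A minimal fake S3 event, matching the shape AWS sends on object creation:
fake_event = {"Records": [{"s3": {"bucket": {"name": "mole-images"},
                                  "object": {"key": "mole1.png"}}}]}
resp = handler(fake_event, None)
```

The Android app then only has to upload the image to S3 and parse the JSON body it gets back.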


Date: 25.2.18
We are focusing on writing code for the asymmetry calculation. We are trying to divide the image into two parts and then use MSE (mean squared error) and SSIM (structural similarity) to produce an asymmetry score.
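The MSE half of the idea can be sketched in a few lines of numpy (SSIM is omitted here since it needs an extra library; the function name and toy blobs are illustrative):

```python
import numpy as np

def asymmetry_mse(mask):
    """Split a blob image down its vertical midline, mirror the right half
    onto the left, and return the mean squared error between the halves
    (0.0 means perfectly mirror-symmetric)."""
    half = mask.shape[1] // 2
    left = mask[:, :half].astype(float)
    right = mask[:, mask.shape[1] - half:].astype(float)[:, ::-1]  # mirrored
    return ((left - right) ** 2).mean()

sym = np.zeros((10, 10)); sym[3:7, 2:8] = 1    # blob symmetric about the midline
asym = np.zeros((10, 10)); asym[3:7, 2:5] = 1  # mass only on the left half
```

The further the score is from zero, the more asymmetric the mole is about the chosen axis; running it for both axes covers the A of the ABCD rule.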


Date: 23.3.18
We created a Python virtual-env zip file on EC2 (Amazon Linux) with OpenCV and imgDiff. After uploading the zip file with some code as a Lambda function, we found out that some OpenCV functions do not work and the imgDiff library does not work at all. It is still unclear whether it is possible to use a Lambda function for the asymmetry calculation.


Date: 14.4.18
After putting a lot of effort into trying to build a Lambda function with OpenCV and imgDiff, we decided to write offline asymmetry-calculation code to examine the results locally. We built Python code and a script that runs over 2000 pictures and extracts the asymmetry results to an Excel table.


Meeting

Date: 15.4.18
Time: 18:00
Location: Assaf's office at the College.
Attendees: Dr. Assaf Spanier, Shai Lehmann, Omri Terem.
We presented the asymmetry results table to Assaf and tried to find a threshold for dangerous asymmetry, but we could not find a clear pattern. Because of that, we decided with Dr. Spanier to examine using deep learning instead of OpenCV.


Date: 20.4.18
We are looking into TensorFlow and trying to get a working classifier.


Date: 30.4.18
We have managed to create a TensorFlow classifier and are currently trying to understand how effective it is.


Date: 2.5.18
In the meantime we have a classifier, and we want to upload it to the AWS cloud and create a Lambda function whose input will be an image uploaded from the Android device.


Date: 10.5.18
We are unable to create the AWS Lambda with the needed packages. So far, no luck.


Date: 14.5.18
After reading about it, we are able to upload the needed Python packages; now we need to set the AWS credentials.


Date: 15.5.18
We have a working AWS Lambda function that takes an upload event as input and can produce a diagnosis.


Date: 20.5.18
Our main work now is to improve the UI. Working on it.


Date: 27.5.18
We have built a completely redesigned UI based on the Material Design language.

Now we need to connect the functionality to the new design and add new abilities to the application, such as writing the diagnosis on the picture as a watermark, saving it to device memory, and using it to show the user's diagnosis history.


Date: 3.6.18
We finished integrating the camera/gallery and AWS functionality into the new UI.
Now, when the user gets a diagnosis, the app writes it onto the picture as a watermark and saves it to the phone's memory for the history view. We also added a lot of new features:

  1. Popup diagnosis dialog.
  2. A functional "About us" button.
  3. Important links page.
  4. History page with an animated gallery view.
  5. The important links page gets its updates from AWS.

Date: 10.6.18
We configured the S3 buckets to auto-delete objects (images and JSONs).
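As a sketch, this auto-deletion can be expressed as an S3 lifecycle rule like the following (the rule ID, `uploads/` prefix, and one-day expiration are illustrative assumptions, not our exact configuration):

```json
{
  "Rules": [
    {
      "ID": "expire-uploads",
      "Filter": { "Prefix": "uploads/" },
      "Status": "Enabled",
      "Expiration": { "Days": 1 }
    }
  ]
}
```

With a rule like this in place, neither user images nor result JSONs accumulate in the bucket.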
