Tuesday, 22 July 2014

PhD: Visual Analysis in Population Imaging Research (VAnPIRe)

It's been a long time since I posted something on the blog!
I've been quite busy over the last three months because I applied for a PhD position at the Delft University of Technology.


This is the description of the position which I applied for:
The PhD/PostDoc position will be part of the Population Imaging Genetics project (stw-imagene) that involves linking observations on the human genome to observations in imaging data. Novel, genome-wide sequencing approaches combined with large-scale population imaging studies open up unprecedented possibilities for discovering the cause of a disease and relating it to its anatomical and functional consequences.
The exact nature of the features (markers) that have the highest correlation with the clinical outcomes under study is by definition hard to predict. Due to the magnitude and heterogeneity of the data, as well as the nonspecific nature of the features that are being sought, this is a complex and laborious process.
We envision a new class of visual analysis techniques that enable the structured and interactive visual exploration of population imaging data. With these techniques, patterns and high-potential hypotheses can be flexibly derived from population imaging data, aiding both the derivation of insight and the search for predictive data features.
The main aim of this project is to develop and evaluate a new, interactive visual analysis approach that enables the extraction of patterns and high-potential hypotheses from the irregular and complex population imaging research data.
New insights into the mechanisms behind the clinical outcome of a population can be extracted by augmenting the human visual system with interactive visualization and coupled feature extraction techniques.
After a few Skype interviews and a visit to Delft, they chose me for the job! :)
I'm really excited and I'm looking forward to the opportunity to work in the Computer Graphics and Visualization group of one of the best universities in Europe!


Monday, 12 May 2014

OpenWorm: A Digital Organism In Your Browser

The Kickstarter funding campaign for the OpenWorm project is coming to an end.



You may ask, "what is OpenWorm?".
"OpenWorm is an open science project aimed at building the first digital organism, a microscopic worm called C. elegans. With this KickStarter we want you to join us in contributing to the creation of the first digital organism and we want to give you your own WormSim to play with on your web browser. Your WormSim will improve over time and will get smarter as our model gets better and closer to the real one."

"Science still does not understand the human brain to the level needed to cure diseases such as Alzheimer’s and Parkinson’s. Believe it or not, science doesn’t even fully understand the brain of a simple worm! If we cannot build a computer model of a worm, the most studied organism in all of biology, we don’t stand a chance to understand something as complex as the human brain."

I find this project really interesting, and all the code is available on GitHub! (I'm actually planning to contribute to it.) It's a very cool mix of different technologies and programming languages, from OpenCL for GPGPU computing to OpenGL, WebGL and Three.js for the visualization, with a lot of C++/Java/Python in the middle.



With less than a week left in the funding campaign and more than $80k still to be pledged, we have to spread the word as widely as possible!

Sunday, 27 April 2014

New book on my shelves: "Data Visualization - Principles and Practice"

I have bought the book "Data Visualization: Principles and Practice" in order to improve my knowledge of scientific data visualization.



The book does not have many reviews on Amazon. Despite that, they are quite good, and the table of contents seems interesting to me.

In particular these are the topics that I find stimulating:
  • Vector visualization
  • Tensor visualization
  • Multivariate data visualization
Unfortunately, neither network nor graph visualization techniques are described in the book. I could find a lot of material on this particular topic (maybe too much), and it will be the subject of my next blog post.



Wednesday, 26 March 2014

An Artificial Intelligence for the 2048 game

Lately, the 2048 game has gained considerable popularity on the internet.


A discussion about possible algorithms for solving the 2048 game arose on StackOverflow.

The main algorithms discussed are:

The solution I propose is very simple and easy to implement. Nonetheless, it has reached a score of 131040. Several benchmarks of the algorithm's performance are presented below.


Algorithm

Heuristic scoring algorithm

The assumption my algorithm is based on is rather simple: if you want to achieve a high score, the board must be kept as tidy as possible. In particular, the optimal setup is given by a linear and monotonically decreasing order of the tile values.

This intuition also gives an upper bound for a tile value: $2^{n} \rightarrow 2^{16} = 65536$, where $n$ is the number of tiles on the board. (There is a possibility to reach the 131072 tile if a 4-tile is randomly generated instead of a 2-tile when needed.)

Two possible ways of organizing the board are shown in the following images



To enforce the ordering of the tiles in a monotonically decreasing order, the score is computed as the sum of the linearized values on the board multiplied by the values of a geometric sequence with common ratio $r<1$.

$p_n \in Path_{0 \cdots N-1}$

$score = \sum_{n=0}^{N-1} value(p_n) * r^n$

Several linear paths can be evaluated at once; the final score is the maximum score over all paths.
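A minimal Python sketch of this scoring rule (the path construction and the function names are illustrative, not the exact code from the repository):

```python
def heuristic_score(board, paths, r=0.25):
    # board: 4x4 list of lists of tile values (0 for empty cells)
    # paths: candidate linearizations of the board, each a list of (row, col)
    # For each path, weight the tile values by the geometric sequence r**n
    # and keep the best-scoring path.
    best = 0.0
    for path in paths:
        score = sum(board[i][j] * (r ** n) for n, (i, j) in enumerate(path))
        best = max(best, score)
    return best

# Example path: a "snake" that sweeps the rows in alternating directions,
# matching the linear, monotonically decreasing layout described above.
snake = [(i, j if i % 2 == 0 else 3 - j) for i in range(4) for j in range(4)]
```

A board whose tiles decrease monotonically along the path scores higher than the same tiles scrambled, which is exactly the "tidy board" intuition.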

Decision rule

The decision rule implemented is not particularly smart; the code in Python is presented here:
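The embedded snippet is not reproduced above; a greedy one-ply sketch of such a rule (the `moves` interface is hypothetical — the actual code lives in the term2048-AI repository) looks like this:

```python
def choose_move(board, moves, score_fn):
    # moves: mapping from a move name to a function board -> new board
    # score_fn: heuristic scoring function for a board state
    # Greedy one-ply rule: simulate every move and pick the one that
    # yields the highest heuristic score; moves that leave the board
    # unchanged are illegal and skipped.
    best_move, best_score = None, float("-inf")
    for name, apply_move in moves.items():
        new_board = apply_move(board)
        if new_board != board and score_fn(new_board) > best_score:
            best_move, best_score = name, score_fn(new_board)
    return best_move
```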

An implementation of minimax or expectiminimax would surely improve the algorithm. Obviously, a more sophisticated decision rule would slow down the algorithm and require some time to implement. I will try a minimax implementation in the near future (stay tuned).

Benchmark


  • T1 - 121 tests - 8 different paths - $r = 0.125$
  • T2 - 122 tests - 8 different paths - $r = 0.25$
  • T3 - 132 tests - 8 different paths - $r = 0.5$
  • T4 - 211 tests - 2 different paths - $r = 0.125$
  • T5 - 274 tests - 2 different paths - $r = 0.25$
  • T6 - 211 tests - 2 different paths - $r = 0.5$



In the case of T2, four tests out of ten generate the 4096 tile, with an average score of $\sim 42000$.

Code

The code can be found on GitHub at the following link: https://github.com/Nicola17/term2048-AI
It is based on term2048 and is written in Python. I will implement a more efficient version in C++ as soon as possible.

StackOverflow

You can upvote my answer on StackOverflow here :)

Monday, 17 March 2014

Polygon mesh data structure

My first tests with OpenGL use a triangle soup, which is basically a collection of triangles with no relationship whatsoever. In order to implement more complex shaders, I need to work with a more complex and powerful data structure: the polygon mesh.
A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling. The faces usually consist of triangles, quadrilaterals or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes.
[Wiki Polygon Mesh]

There is a variety of ways to represent a polygon mesh, and they basically differ in how the vertex and topology information is stored. One of the most used representations in the field of computer graphics and geometry processing is the halfedge data structure.
Polygonal meshes consist of geometry (vertices) and topology (edges, faces). Data structures for polygonal meshes mainly differ in the way they store the topology information. While face-based structures lack the explicit representation of edges, and edge-based structures lose efficiency because of their missing orientation of edges, halfedge-based structures overcome these disadvantages. The halfedges (which result from splitting the edges into two oriented parts) store the main connectivity information:
Intuitively, this provides a natural and simple way to represent vertices, edges and faces, as well as arbitrary polygons. 
[OpenMesh]
Many operations and queries are very natural on the halfedge data structure, in particular the circulation on the neighbors of vertices and faces.
Traversal of the one-ring neighbors of a center vertex. From left to right:
  1. Start from the center vertex.
  2. Select an outgoing halfedge (and access its target vertex).
  3. Move to the previous halfedge.
  4. Move to the opposite halfedge (and access its target vertex).
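The circulation above can be sketched in Python with a minimal halfedge record (the field names `target`, `prev`, and `opposite` are illustrative, not OpenMesh's or SurfaceMesh's actual API):

```python
class Halfedge:
    # Minimal halfedge record: the target vertex it points to, the previous
    # halfedge in the same face loop, and the oppositely oriented twin.
    def __init__(self, target=None):
        self.target = target
        self.prev = None
        self.opposite = None

def one_ring(outgoing):
    # Circulate around the center vertex, starting from one outgoing
    # halfedge: collect the target vertex, then move to the previous
    # halfedge and jump to its opposite (steps 3 and 4 above), until we
    # are back at the starting halfedge.
    neighbors = []
    h = outgoing
    while True:
        neighbors.append(h.target)
        h = h.prev.opposite
        if h is outgoing:
            break
    return neighbors
```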

As far as I know, the high-quality, publicly available C++ libraries are:
  1. CGAL
  2. OpenMesh
  3. SurfaceMesh
  4. Mesquite
I have never used Mesquite; it is mainly focused on mesh optimization [BFKLM03]. Although CGAL is a very powerful library for computational geometry, its polygonal mesh implementation has a major drawback: all the custom properties associated with a mesh entity must be declared at compile time.

OpenMesh and SurfaceMesh are more flexible and much easier to use than CGAL. OpenMesh was originally developed at Aachen University [BSBK02], and version 3.0 was recently released. On the other hand, SurfaceMesh is more recent [SB11] and was developed at Bielefeld University.
They have a lot in common, starting with one of their developers, Mario Botsch, who is also the head of the Computer Graphics & Geometry Processing Group at Bielefeld.

Based on our experience in academic research and teaching as well as in industrial cooperations, our primary design goal [of the SurfaceMesh] is ease of use. An easy-to-use data structure is learned faster, allows to focus on the main problem (instead of on the details of the data structure), and fosters code exchange between academic or industrial research partners. The data structure should therefore be just as flexible and generic as needed, but should otherwise be free of unnecessary switches and parameters. At the same time, however, we have to make sure not to compromise computational performance and memory consumption. Otherwise the data structure would be easy to use, but not useful, and hence would probably not be used at all.
[SB11]
The SurfaceMesh's primary design goal is ease of use; it therefore satisfies the requirements of the OpenGL-Sandbox very well, and I will adopt it as the polygonal mesh data structure in my program.

In the end, I'll show you an example using the SurfaceMesh that computes the mean valence of the mesh vertices:
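The SurfaceMesh snippet itself is not reproduced here; as an illustration of what it computes, here is the same quantity (the average number of neighbors per vertex) over a plain Python adjacency map — a hypothetical, library-free stand-in for the mesh's vertex circulators:

```python
def mean_valence(adjacency):
    # adjacency: dict mapping each vertex to the list of its neighbors.
    # The valence of a vertex is its number of neighbors; the mean valence
    # averages this over all vertices of the mesh.
    if not adjacency:
        return 0.0
    return sum(len(nbrs) for nbrs in adjacency.values()) / len(adjacency)
```

On a tetrahedron, for example, every vertex has valence 3, so the mean valence is 3.0.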

References

[SB11] Design, Implementation, and Evaluation of the Surface_mesh Data Structure
[BFKLM03] The Mesquite mesh quality improvement toolkit
[BSBK02] OpenMesh – a generic and efficient polygon mesh data structure
[OpenMesh]
[Wiki Polygon Mesh]
[CGAL]
[Mesquite]

Friday, 14 March 2014

OpenGL Sandbox - My first shader

Here is the rendering of the Utah Teapot and the Stanford Dragon with my first shader.



It's a very simple shader, and it does not create any light source in the scene. The depth perception is given by the coloration of the triangles. As you can see in the teapot image, each triangle is colored using a blend of the axis colors, weighted by the triangle's normal vector.

The shader's code is straightforward:
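The embedded snippet is not reproduced here, but the idea can be sketched as a minimal GLSL pair (illustrative names, not the repository's exact code): the vertex shader forwards the per-triangle normal with flat interpolation, and the fragment shader maps its absolute components to the RGB channels.

```glsl
// --- vertex shader (sketch) ---
#version 330 core
in vec3 position;
in vec3 normal;
uniform mat4 mvp;        // model-view-projection matrix
flat out vec3 vNormal;   // flat: one normal per triangle, no interpolation
void main() {
    vNormal = normal;
    gl_Position = mvp * vec4(position, 1.0);
}

// --- fragment shader (sketch) ---
#version 330 core
flat in vec3 vNormal;
out vec4 fragColor;
void main() {
    // Blend of the axis colors: |nx|, |ny|, |nz| drive the R, G, B channels.
    fragColor = vec4(abs(normalize(vNormal)), 1.0);
}
```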


Here below, the code that passes the vertex attributes to the shader is presented. Qt's helper classes are a good choice for quickly writing more readable code, so I will use this approach whenever possible.

Monday, 10 March 2014

OpenGL Sandbox - New data and camera handling


I am working quite hard on the OpenGL Sandbox: you can find the repository on GitHub.


I've included some data from the Stanford 3D Scanning Repository in the list of available 3D models;
the "Stanford bunny" is rather important in the field of computer graphics and deserves a special mention: if you are interested in its history, you can read about it here.


Furthermore, I have included a very flexible logging system, which lets me know exactly what is happening inside my program.

The other important thing I've done is create my own camera handling framework. The original idea was to implement an FPS-like camera, but this setup turned out to be rather uncomfortable. Therefore, I redesigned it to work in the following way.


The camera points at a fixed point $\bar{C_l}$ in space, and it can rotate on a sphere centered at $\bar{C_l}$ with radius $C_r$.
You can move the camera on that sphere with a left-click and drag of your mouse. The radius $C_r$ can be modified with the mouse wheel.

You can also move the point $\bar{C_l}$ around, using your keyboard: with the W/S keys you can shift $\bar{C_l}$ along the Z-axis, with A/D along the X-axis, and with R/F along the Y-axis. I'm sorry to disappoint you, but the R key does not involve any reload operation ;)
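The orbiting behavior boils down to placing the eye on a sphere around the look-at point. A small Python sketch of that computation (the spherical parametrization and names are a hypothetical illustration, not the sandbox's actual C++ code):

```python
import math

def orbit_camera(center, radius, theta, phi):
    # center: the look-at point C_l; radius: the sphere radius C_r.
    # theta/phi: azimuth and polar angles accumulated from mouse drag.
    # Returns the eye position on the sphere of the given radius around center.
    cx, cy, cz = center
    return (cx + radius * math.sin(phi) * math.cos(theta),
            cy + radius * math.cos(phi),
            cz + radius * math.sin(phi) * math.sin(theta))
```

Scrolling the wheel then just changes `radius`, and the keyboard shifts translate `center` along the axes.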

At the moment I'm eager to start working on the shading; I just need to make a few modifications to the application and then I can begin.

Tuesday, 25 February 2014

My first OpenGL application

After two years of working in the fields of 3D reconstruction, computational geometry and 3D modeling, I've realized that I need to learn a little bit of rendering as well. During my free time I have watched several online video lectures and started the online course "Interactive 3D Graphics" on Udacity.

I've decided to use OpenGL as the rendering API because it is multiplatform and easily extendable to web applications thanks to WebGL and Three.js.

The first thing to do is to buy the Red Book of OpenGL. Another useful way to start is the video lecture "An Introduction to OpenGL Programming", available in the SIGGRAPH University program.

After some reading and watching, I've created my first application. Although I just started from the "hellogl" example, I have already reorganized the code in order to make it easier to add new tests.



The next step is to create the "Hello world" test that could only be the rendering of the Utah Teapot!


PS: for the lazy people here's the video lecture that I mentioned before

Saturday, 8 February 2014

Version control of OpenOffice/LibreOffice documents using git and Python

All my colleagues in the R&D department love LaTeX! Unfortunately, it's not the same for the commercial office people.

This is the reason why we write our manuals and documentation with LibreOffice. After the pain of a big revision of one of our manuals, I've decided to introduce version control for the documentation as well.

Thus, I wrote a Python script that automatically generates a Git-friendly version of ODT files.

Here's the link to the GitHub project and to a test project

If you launch the script with the -h option, you will see how to use it:
odt_git_helper.py -h
usage: odt_git_helper.py [-h] [-v] [-c | -e] Directory ODT_File
Extract/Create an odt file in/from a directory.
positional arguments:
  Directory       target/destination directory
  ODT_File        target/destination odt file
optional arguments:
  -h, --help      show this help message and exit
  -v, --verbose   increase output verbosity
  -c, --compress  compress into an .odt file
  -e, --extract   extract odt file into a directory (default)

Monday, 20 January 2014

Single nm linewidth measurement

My girlfriend Marijke Scotuzzi is a PhD candidate at the Delft University of Technology.

Her main goal is to deposit small structures in the sub-10nm range.

She deposited some lines next to each other and needs to measure the width of those lines; the figure below shows an example. You may notice that it is affected by heavy high-frequency noise.
Furthermore, the lines can hardly be distinguished from the black spots that lie on the surface around them.

I've tried to help her out by writing a C++ program to enhance the original image.
Luckily, I can make some assumptions about the input image: the lines are always vertical, and they cross the top and bottom edges of the image.

I've developed the following algorithm:
  1. Apply a median blur to the image (optional): \[MedianImage[j][i] = medianBlur(Image[j][i])\]
  2. Compute a per-column weight: \[W[i] = \sum_{j=0}^{J} MedianImage[j][i]^2\]
  3. Apply a bilateral filter to the vector W (optional): \[W[i] = bilateralFilter(W[i])\]
  4. Compute the enhanced image: \[EnhancedImage[j][i] = Image[j][i] \cdot \min((satO+W[i]) \cdot satM, 255)\]
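As a numpy sketch of steps 2 and 4 (the optional median and bilateral filters are left out, and satO/satM become keyword parameters; this is an illustration of the formulas, not the original C++ program):

```python
import numpy as np

def enhance(image, sat_o=0.0, sat_m=1.0):
    # image: 2D array with rows j and columns i.
    # Step 2: per-column weight W[i] = sum_j Image[j][i]^2
    w = (image.astype(float) ** 2).sum(axis=0)
    # Step 4: scale each column by the clamped gain min((satO + W[i]) * satM, 255)
    gain = np.minimum((sat_o + w) * sat_m, 255.0)
    return image * gain[np.newaxis, :]
```

Columns containing bright vertical lines accumulate a larger weight and are therefore amplified relative to the background.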

Here below, some results are shown for a couple of images:

Input

Output

Median blur
Weights
Bilateral weights