Thursday, January 17, 2008

Status Report for 01/17/2008

Here is the outline of the document I had mentioned in my previous post. This is actually a thesis chapter titled "High-performance design features, measurements and analysis".

The chapter focuses on high-performance design features and analysis in map rendering from heterogeneous distributed scientific geo-data.

The chapter first presents the common performance issues in interoperable Service-oriented Geographic Information Systems (Chapter 6.1), and then provides baseline performance test results obtained from GIS systems developed with conventional approaches (Chapter 6.2).

The proposed performance design features and corresponding measurements/analysis (Chapter 6.3) are grouped into two. The first focuses on XML-encoded (GML common data model) data transfer and rendering (Chapter 6.3.1), and the second focuses on design issues for implementing caching and parallel processing techniques at the federator (Chapter 6.3.2).

Here is a snapshot of the chapter outline. Please click on it for a better view.

Thursday, January 10, 2008

Status Report for 01/09/2008

1. I have improved the document titled "High-performance design features, performance and measurements". I have put more thought into it and added more detailed performance analysis. I have also worked on coding in parallel with documenting.
I will be submitting the document soon.

2. I have installed multiple Web Feature Services, databases and NaradaBrokering nodes on the gridfarm machines gf12 to gf19. I have redone all the tests given earlier in Chapter 1.3.2 of the document mentioned in item 1. Since these machines perform much better than gf1 to gf8, I got really good performance results.

3. I have developed a new algorithm for assigning the worker nodes for parallel processing. The partitions coming from the query decomposition are assigned to the worker nodes in round-robin fashion.

According to the algorithm:
PN: number of partitions
WN: number of worker nodes
share = floor(PN / WN): number of partitions each worker node is supposed to get
rmg = PN mod WN: remainder of the division (0 if PN is evenly divisible by WN)
The first rmg worker nodes are assigned share + 1 partitions each, and
the remaining worker nodes are assigned share partitions each.
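The assignment rule above can be sketched in a few lines of Python. This is only an illustrative sketch, not the actual implementation; the function name and the worker-node names (gf12 and so on) are hypothetical:

```python
def assign_partitions(partitions, worker_nodes):
    """Assign partitions to worker nodes in round-robin fashion.

    With PN partitions and WN workers, the first PN mod WN workers
    end up with floor(PN/WN) + 1 partitions; the rest get floor(PN/WN).
    """
    PN, WN = len(partitions), len(worker_nodes)
    share, rmg = divmod(PN, WN)  # base share per worker, and the remainder

    assignment = {w: [] for w in worker_nodes}
    # Dealing partitions out one at a time (i mod WN) gives exactly
    # share+1 partitions to the first rmg workers and share to the rest.
    for i, p in enumerate(partitions):
        assignment[worker_nodes[i % WN]].append(p)
    return assignment

# Example: 7 partitions over 3 workers -> shares of 3, 2, 2
result = assign_partitions(list(range(7)), ["gf12", "gf13", "gf14"])
```

With 7 partitions and 3 workers, share is 2 and rmg is 1, so the first worker receives 3 partitions and the other two receive 2 each.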