I found a podcast on machine learning by Professor Andrew Ng from Stanford University on YouTube. There are 20 lectures in that series.
Monday, 29 June 2009
Sunday, 28 June 2009
Debugging in NetBeans
JCAT has 3 different ways to run. I tried debugging the JCAT code in NetBeans 6.5 and it seems to work very slowly. As JCAT doesn't come as a NetBeans project, I imported it as a free-form project, and I'm not sure whether that is what makes debugging slow, or whether I should have configured something debugging-related when creating the project.
Saturday, 27 June 2009
Countdown and data analysis
Only 12 days are left until the opening shot. So far I've run several simulations related to the fees. I struggled to analyse the logs generated by JCAT: I used Microsoft Excel and NeoOffice (the Mac equivalent of OpenOffice) and couldn't find an efficient way to process the huge amount of data. I was advised to try pivot tables (that's the term in Excel; it's called Data Pilot in NeoOffice). I don't master that feature yet, but it does make life simpler.
In order to analyse the effect of the different variables, I kept all of them static except one, adjusted that variable to 5-7 different values and compared the results. I'm not going to write my outcome here, but I can tell that it's best not to keep the fees fixed during the whole duration of the game; they need to be adjusted.
Running the game on my computer I observed that it runs at different speeds, depending on the computer's processor, when selecting CallBasedInfrastructureImpl in the config file. There are also different configurations for the clock; I haven't yet checked in the API what the difference is. It's possible to run multiple games simultaneously on one machine: I just had to change the params file to write the log to different files and, when running Ant, pass the parameter -Dparams to run the game with a different params file.
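As a rough sketch of how such a sweep could be scripted (the parameter keys and the Ant target name below are placeholders I made up for illustration, not actual JCAT names; in practice I'd rewrite a copy of my full params file and only change the relevant lines):

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    // Sketch: generate one params file per fee value and launch each run via Ant.
    public class FeeSweep {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Fee values to compare while every other setting stays fixed.
            double[] fees = {0.0, 0.05, 0.1, 0.2, 0.3};
            for (double fee : fees) {
                // The keys below are placeholders, not real JCAT parameter names.
                File params = new File("params", "fee-" + fee + ".params");
                params.getParentFile().mkdirs();
                FileWriter out = new FileWriter(params);
                out.write("placeholder.fee = " + fee + "\n");
                out.write("placeholder.logfile = logs/fee-" + fee + ".csv\n");
                out.close();
                // Launch the game through Ant with this params file; runs could also
                // be started in parallel, since each one writes to its own log file.
                ProcessBuilder pb = new ProcessBuilder(
                        "ant", "-Dparams=" + params.getPath(), "server");
                pb.inheritIO(); // show Ant's output in this console
                pb.start().waitFor();
            }
        }
    }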
Labels: config, configuration, jcat, log, neooffice, openoffice, tournament
Saturday, 20 June 2009
Explorations
I noticed in my experiments that the transaction success rate isn't 1 even though I tested with one market, which means that for some reason there are shouts that fail. I chose an equal number of sellers and buyers and they all chose that market. That also affected the profit, but as it's a single-market game we still got the full score.
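To be clear about what I mean by that ratio, here is a tiny bookkeeping helper (my own illustration, not part of JCAT's API or its log format):

    // Illustration only: the ratio of shouts that ended in a transaction
    // to shouts placed, which ideally would be 1.0 in a single-market game.
    public class ShoutStats {
        private int placed;
        private int transacted;

        public void recordShoutPlaced()     { placed++; }
        public void recordShoutTransacted() { transacted++; }

        public double successRate() {
            return placed == 0 ? 0.0 : (double) transacted / placed;
        }
    }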
I didn't pay much attention to the configuration of traders in my tests until I read the paper Some preliminary results on competition between markets for automated traders; it can make a difference even to the issue I discussed above.
Thursday, 18 June 2009
Experiments
Ideas for ideal strategies might just arise? Well, I thought so, but I soon found that I need to run experiments and explore how different strategies, when their parameters are tuned, affect the scoring.
I ran several experiments with 10 identical specialists and it was interesting to find a big deviation between the best score and the worst: the highest was 30.2 and the worst was 20.4 in one game. I noticed the deviation remains similar, but is spread over different specialists in different games. I ran 3 iterations in a game, so perhaps the number of iterations sometimes needs to be higher? Well, I'm not so sure about these results yet.
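To check whether the spread actually shrinks with more iterations, I intend to look at the mean and standard deviation of the scores across games; a minimal sketch (only 30.2 and 20.4 in main are real observations, the rest are made-up placeholders):

    // Small helper for summarising the spread of specialist scores across games.
    public class ScoreSpread {
        public static double mean(double[] scores) {
            double sum = 0.0;
            for (double s : scores) sum += s;
            return sum / scores.length;
        }

        public static double stdDev(double[] scores) {
            double m = mean(scores);
            double sq = 0.0;
            for (double s : scores) sq += (s - m) * (s - m);
            return Math.sqrt(sq / scores.length);
        }

        public static void main(String[] args) {
            // Placeholder scores for 10 identical specialists in one game.
            double[] scores = {30.2, 28.7, 27.9, 26.5, 25.8, 25.1, 24.0, 23.3, 21.6, 20.4};
            System.out.printf("mean = %.2f, std dev = %.2f%n",
                    mean(scores), stdDev(scores));
        }
    }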
Tuesday, 16 June 2009
Java version and NetBeans
This project isn't about Java, but an IDE makes life easier; it's a nightmare to configure it properly, though.
jCAT requires a minimum of Java 1.5. I tried to use the newest version, Java 6, but I got an incompatibility error when trying to run only the server. I made sure to change the level to 1.5 in the project properties and rebuilt the project. This error usually arises when using a class compiled with a newer Java version, so perhaps it has something to do with the version of Java that Ant uses, as I had the problem both when running within NetBeans (right-clicking build.xml and selecting Run Target and then Server) and when doing it from a shell (Mac) using Ant.
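A quick way to see which JVM is actually being used is to print its properties from a throwaway class (running java -version and ant -diagnostics in the same shell also helps):

    // Throwaway check: print the version and home of the JVM running this code,
    // to compare against what the project's 1.5 level expects.
    public class VersionCheck {
        public static void main(String[] args) {
            System.out.println("java.version       = " + System.getProperty("java.version"));
            System.out.println("java.home          = " + System.getProperty("java.home"));
            System.out.println("java.class.version = " + System.getProperty("java.class.version"));
        }
    }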
Sunday, 14 June 2009
Trial starts tomorrow
In the last week I spent most of my time setting up my computer to participate in the trial. Using the command ant specialist from the terminal (I run it on a Mac), I create a jar file to run the specialist as a standalone application, and using a params file I can configure my specialist. I'm not sure if I explained before what a specialist is: the specialist is the market that competes against the other specialists, and the one that gains the highest score wins the tournament.
The CAT package has 3 running modes.
1. Everything runs in a single process. This mode is used to test my specialist against other specialists on my machine. I don't believe it's possible to use it with previous years' specialist implementations, as they are delivered compiled, without the code; hence there's option 2.
2. Different specialists and traders run in different threads; it's much faster than option 3.
3. Everything runs in different processes that talk through the CATP protocol, so theoretically they can be on different machines. It slows down the running of the game and is used mostly during the competition, as the specialists run from different machines in different parts of the world.
Last week I checked different ways to use statistical learning and reinforcement learning for my specialist. I'm still at an early stage here and am trying to find out into which part of the bidding process during a trading day I should insert the learning. This is something I'll do in the next 2 weeks.
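One direction I'm considering (purely a sketch of the idea, not wired into JCAT): the daily fee decision looks like a natural place for a simple reinforcement-learning rule, for example an epsilon-greedy choice over a few fee levels, rewarded with the previous day's score:

    import java.util.Random;

    // Sketch of an epsilon-greedy fee chooser: one "arm" per candidate fee level,
    // rewarded each day with whatever daily score the specialist obtained.
    // Not tied to JCAT's classes; it only illustrates where learning could sit.
    public class EpsilonGreedyFees {
        private final double[] feeLevels;   // candidate fee values
        private final double[] meanReward;  // running mean reward per fee level
        private final int[] pulls;          // how often each level was tried
        private final double epsilon;       // exploration probability
        private final Random rng = new Random();
        private int lastChoice = -1;

        public EpsilonGreedyFees(double[] feeLevels, double epsilon) {
            this.feeLevels = feeLevels;
            this.meanReward = new double[feeLevels.length];
            this.pulls = new int[feeLevels.length];
            this.epsilon = epsilon;
        }

        /** Called at the start of a trading day: pick the fee to charge today. */
        public double chooseFee() {
            if (rng.nextDouble() < epsilon) {
                lastChoice = rng.nextInt(feeLevels.length); // explore
            } else {
                lastChoice = 0;                             // exploit the best mean so far
                for (int i = 1; i < feeLevels.length; i++) {
                    if (meanReward[i] > meanReward[lastChoice]) lastChoice = i;
                }
            }
            return feeLevels[lastChoice];
        }

        /** Called at the end of the day with the day's score as the reward. */
        public void reward(double dailyScore) {
            if (lastChoice < 0) return;
            pulls[lastChoice]++;
            meanReward[lastChoice] += (dailyScore - meanReward[lastChoice]) / pulls[lastChoice];
        }
    }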
I also found out that specialists configured with the delivered CAT, when running 2 of them against last year's winning strategy, PersianCat, lose all the way from day 1 or 2: traders sometimes select a specialist randomly, but settle on PersianCat the next day, probably because it offered better fees. I need to find out how I can get the previous results (fees) from the logs.
Sunday, 7 June 2009
Trial starts on the 15th June
In the week that starts on the 15th of June, a trial of the tournament will be held. The purpose of the trial is to test whether the contestants can successfully connect to the server and participate.
This constraint forces me to focus on making sure that my specialist can connect to the game trial server at Liverpool University.
The document JCAT: The Software Platform for CAT Games was updated on the 1st of May; one of the additions is an explanation of the JCAT platform parameters that make it possible to configure the specialist client from outside the code.
For the trial I'll run the game server and the specialist client in different processes. I might need to run additional specialists and traders for testing purposes.
My time plan has therefore changed, and until the 15th I will work on making sure I manage to connect to the game server. If I manage to finish before the 15th, I should start working on selecting a strategy to implement. I've read about different learning techniques in an AI book, and I'll choose the most suitable one and implement it.
After the trial week I'll have 1-2 days to select which learning technique to implement, a week to implement it, 2-3 days to plan and build a test plan to compare my strategy to previous years' participants, and 2-3 days to run the tests. That brings me to a week before the competition.