Monthly Archives: April 2007

April Forex Results

I’ve uploaded my April 2007 $100 Experiment Forex trading results and despite the rough patch I had in the middle of the month, I saw a 5.15% growth in trading equity. My expectancy recovered as well but it’s still reeling from my disastrous position sizing debacle! My expectancy for April came in at $0.15.

In April I took 35 trades: 25 winners and 10 losers. My average win was $0.48 versus my average loss of $0.68.
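For anyone wondering how that expectancy number falls out of the trade statistics, here’s a quick sketch. The formula is the standard expectancy calculation; the figures are April’s from above:

```python
# Expectancy = (win rate x average win) - (loss rate x average loss)
wins, losses = 25, 10            # April trade counts
avg_win, avg_loss = 0.48, 0.68   # average win/loss in dollars

total_trades = wins + losses
win_rate = wins / total_trades
loss_rate = losses / total_trades
expectancy = win_rate * avg_win - loss_rate * avg_loss
print(f"Expectancy per trade: ${expectancy:.2f}")  # roughly $0.15
```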

I’m holding a total of four positions going into May, 2 in EURUSD and 2 in AUDUSD. When I close them out, I’ll detail them in May’s results sheet. As part of my neural net analysis for May, I must make an effort to pinpoint the price at which I should exit a trade if the trend reverses. This will hopefully allow me to truly quantify my Reward vs Risk ratios.

[tags] Currencies, Euro, Dollar, Forex, Trading [/tags]

$100 Forex Experiment – Profits Galore

Friday was a damn good day trading Forex! I made over $2 in profits from the GBPUSD currency pair alone (roughly 1.8%). I was elated to make such a good profit but I must remain humble. I can’t let my ego or euphoria get a hold of me or else I will get slaughtered. The same goes for greed; I must remain zen-like and take profits when I see them.

I’m posting a chart so you can see how my GBPUSD trade worked out in the end; I sold darn near the top (luck)! I’m still holding my EURUSD position and entered a new one in AUDUSD on Friday afternoon. The trend analysis keeps telling me to be short the USD and long pretty much everything else (Yen not included).

gbpusd-042807.JPG

I’ll be posting my April 2007 Forex results probably on Tuesday and you can follow along (or audit my results) on this new page I created.

[tags]Forex, Results, Experiment, $100, Currency, Markets, Trading, Money[/tags]

Event Driven Analysis

On Friday morning I caught Wallstrip’s chat with Tim Wolters of Collective Intellect, who uses statistical models to extract knowledge from unstructured data sources. I really enjoyed this episode because it highlights how you can use data mining to create Event Driven Analysis (EDA). Coincidentally, I had beers with the Market Doctor last night, where he explained to me that part of his PhD thesis was based on EDA. Well, that just opened up about an hour of technical discussion as we downed our favorite brews.

It’s surprisingly easy to build a rudimentary model and evaluate press releases, earnings announcements, and other key fundamental data relative to the noise of the market. I’m quite interested in following up on EDA and have decided to build a “test” model after I finish writing and posting my YALE Lessons. I’ll probably test the earnings and announcement releases of one or two companies (maybe competitors) against the S&P500 and see what I find.
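To give a flavor of what such a test model might check (this is my own sketch, not the Market Doctor’s method, and every return number is made up): the simplest event-driven comparison is a stock’s returns on announcement days versus its baseline returns.

```python
import statistics

# Daily returns for a hypothetical stock; indices 3 and 6 are
# earnings-announcement days (all numbers are made up).
daily_returns = [0.001, -0.002, 0.0005, 0.012, -0.001, 0.0, 0.015, -0.0008]
event_days = {3, 6}

event_r = [r for i, r in enumerate(daily_returns) if i in event_days]
baseline = [r for i, r in enumerate(daily_returns) if i not in event_days]

# A positive excess return suggests announcements move the stock
# more than ordinary market noise does.
excess = statistics.mean(event_r) - statistics.mean(baseline)
```

A real version would of course control for overall market moves (e.g. the S&P500) rather than just the stock’s own quiet days.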

[tags]Event, Driven, Analysis, Data-mining, Statistics, Business-intelligence, BI, AI, Neural-nets[/tags]

Forex Trading Plan

Does anybody remember the Seinfeld episode where George Costanza started doing everything opposite? His life turned around and he won women, fame, and fortune. Well, I had a George Costanza moment yesterday (ed. repost from Digital Breakfast). I’m trying a new currency trading strategy that does a few things opposite of what I did before. Here’s the revised and stripped-down Forex trading plan.

  1. Concentrate on six major pairs as defined by Oanda.com. They are:
    • EUR/USD
    • AUD/USD
    • EUR/JPY
    • GBP/USD
    • GBP/JPY
    • USD/JPY
  2. Watch the economic calendar for upcoming landmines! Fundamentals drive this market.
  3. Position sizing has changed back to a % of NAV model. Use 15% of NAV as the initial position size. Scale the % down as the account gets larger. The ultimate % of NAV target is 3%.
  4. Use only 10 DMA Bollinger Bands on the 1 hour charts.
  5. Use my neural net models to confirm trends in the EUR, USD, JPY, and GBP.
  6. Don’t use stop losses. Sounds scary doesn’t it?

Smaller multiple entries will help me “dollar cost” average out losses. The trick is to trade with the long-term trend, buy on dips, let positions mature, and slowly scale out of profitable positions over time. If fundamentals change (central bank rate hikes, cuts, etc.), then re-evaluate the trends.
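A rough sketch of how the % of NAV rule and the averaging idea fit together (the account size, unit counts, and prices below are hypothetical, just to show the arithmetic):

```python
def position_size(nav: float, pct: float = 0.15) -> float:
    """Initial position size as a percentage of net asset value (rule 3)."""
    return nav * pct

def average_entry(entries):
    """Blended entry price from multiple smaller entries, e.g. buying dips.

    entries is a list of (units, price) tuples."""
    total_units = sum(units for units, _ in entries)
    total_cost = sum(units * price for units, price in entries)
    return total_cost / total_units

size = position_size(100.0)                          # $15 on a $100 account
blended = average_entry([(7, 2.0000), (8, 1.9900)])  # hypothetical GBPUSD fills
```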

[tags]Forex, Trading, Plan, Currencies[/tags]

$100 Forex Experiment

Good Morning. Yesterday’s currency sell-off triggered several of my buy orders as the EURUSD and GBPUSD pairs dropped through support after support. The end result was that I started to pyramid losses. However, I stuck to my trading plan and braced for a long, wild ride. In case any new readers are wondering, I will (re)post my trading plan on this blog later today.

Finally, overnight the pairs started to firm up and now my GBPUSD pair is showing a nice profit as it heads back to the magic $2.00 level. The same goes for my EURUSD position but it’s still underwater a bit. I have confidence that both pairs will climb back to their highs because my neural trend models continue to show a strong Euro and Pound relative to the dollar. In other words, the Euro and Pound trends remain up. In fact, the signal coming from the Pound is a “STRONG UP!”

GBPUSD-042707

Don’t forget to enter my contest, you have a shot at winning $100!

[tags]currency, forex, euro, pound, dollar, trading, plan, $100, contest[/tags]

Building an AI financial market model – Lesson III

Welcome back! In this lesson we’ll explain how to build your first YALE experiment. Building the experiment takes just a few mouse clicks but how you put the pieces together requires a little thought.

Although this lesson isn’t hard, you have to build your YALE experiment logically and put all the “operators” in the right order. If you don’t, your experiment will crash or, worse, give you weird results. As I said in Lesson I, YALE has a very steep learning curve and I spent many hours pulling out my hair trying to figure out how to make it work.

A basic YALE experiment consists of the following operators:

  1. The root
  2. A data loader
  3. A data visualizer
  4. A data validator
  5. A model creator
  6. A model writer
  7. A model applier
  8. A performance evaluator
  9. Final Experiment

The Root

Every YALE experiment has a Root operator. The Root operator is just where your experiment starts out. Your entire YALE experiment will “roll up” to this operator in the end.

Now, click on the YALE icon to load up the program and select “Blank Experiment.” Once you’ve done that, your screen should look like this:

YALE Root

Data Loader

A blank YALE experiment won’t do us any good; we have to fill it up with operators to help us analyze data and build a model! Before we can build a model, we need to load in our data. Without the data we can’t analyze it or build a model, duh!

YALE requires you to have a data loading operator in your experiment “tree”. Since we are using an Excel spreadsheet we need to use an Excel compatible data loader.

Do the following steps: Right click the Root Operator > Select New Operator > IO > Examples > ExcelExampleSource.

YALE Data Loader

You should then see the ExcelExampleSource operator directly below your Root operator.

Data Visualizer

Once you’ve selected your data loader, you want to add another operator that will let you manipulate and visualize your data.

Tip: This operator is not necessary but I find it useful to see patterns in the data. Skip this step if you want to.

Once again do the following steps: Right click the Root Operator > Select New Operator > Visualization > ExampleVisualizer.

YALE Data Visualizer

Once you click the ExampleVisualizer you’ll see it directly below your ExcelExampleSource operator. When you run the YALE experiment, the data gets loaded in and the ExampleVisualizer lets you click through any data visualizations YALE creates for more information. I like this operator a lot because it lets me find specific dates in the data charts when I see anything out of the ordinary.

Data Validator

The next step, and a very important one, is to transform your data into something called training and validation sets. When you build a model, you build it from training data. You essentially train the model to learn the relationships in your input data that explain the output data. The validation set is used to test the trained model to make sure it makes sense. YALE has an operator that automatically splits up your data randomly into training and validation sets and then feeds those sets into the learning algorithm.
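Under the hood, XValidation performs a cross-validation split. Conceptually it works something like this generic k-fold sketch (an illustration of the idea, not YALE’s actual code):

```python
import random

def kfold_splits(n_rows, k=10, seed=42):
    """Shuffle row indices, deal them into k folds, and yield
    (training, validation) index lists, one per fold."""
    indices = list(range(n_rows))
    random.Random(seed).shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [idx for fold in folds[:i] + folds[i + 1:] for idx in fold]
        yield training, validation

splits = list(kfold_splits(100, k=10))  # 10 train/validation pairs
```

Each row gets one turn in the validation set, so the model is tested on data it never trained on.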

To find the Data Validation operator, do the following steps: Right click the Root operator > Select New Operator > Validation > XValidation.

YALE Data Validator

Model Creator

Next we’ll add in the model creator, also known as the learning algorithm. Before we can place the learning algorithm, we have to create a “split” in the experiment. We’re doing this because we’ll use the same XValidation data set (training and validation) to build the model and test its performance later.

To do this we have to use something called an Operator Chain. An Operator Chain, in my dictionary, is nothing more than a fancy name for a branch in your experiment tree.

To find the Operator Chain, do the following steps: Right click on the XValidation operator > Select New Operator > Operator Chain.

YALE Operator Chain

Now, the step we’ve all been waiting for! We’re going to place the learning algorithm into our experiment. YALE has several different algorithms available for you to use. Some are regression based, others use machine learning, and the one we’ll use is a classification algorithm.

A classification algorithm takes your data and classifies your output data into categories based on your input. Huh? This simply means that the algorithm takes your data and groups it into similar categories (this is very handy when modeling trends because you want to find emerging trends before everyone else does).

When you use prediction data (when you want to see if the trend has changed) against your model, YALE will look for similar patterns and then give you the output signal (UP, DOWN, RANGE, etc) based on the categories your model learned.

The learning algorithm we’ll use is a classifier called “IBk” and to install it into your experiment you have to right click the Operator Chain > Select New Operator > Learner > Lazy > IBk.

YALE Model Creator
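If you’re curious what IBk actually does: it’s a k-nearest-neighbour classifier, meaning it memorizes the training rows and classifies a new row by letting the k most similar rows vote. A toy sketch with made-up two-feature rows:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train is a list of (features, label) pairs; returns the majority
    label among the k training rows nearest to the query point."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up feature vectors labelled with their trend class:
train = [((1.0, 1.0), "UP"), ((1.1, 0.9), "UP"),
         ((5.0, 5.0), "DOWN"), ((5.2, 4.8), "DOWN"), ((4.9, 5.1), "DOWN")]
signal = knn_predict(train, (1.05, 0.95))  # the 3 nearest vote: UP, UP, DOWN
```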

Model Writer

Now, after your model is learned, you want to write it to a file; this way you can load it anytime you want and test new data against it.

To install the Model Writer, right click your Operator Chain > New Operator > IO > Models > ModelWriter

YALE Model Writer

Great! The model portion of the experiment is now done! If you really wanted to, you could learn a model right now, but how would you know if it’s any good? We need some performance measures so we can determine if our model is good enough to make some predictions! That’s handled in the Model Applier and Performance Evaluator sections, see below!

Model Applier

Once again we need to create a branch in our experiment to handle the performance evaluation. To do that, we’ll split off in the same place as before: from the XValidation operator. Follow the same steps as in the Model Creator section to install a new Operator Chain.

Right click on the XValidation operator > Select New Operator > Operator Chain.

YALE Operator Chain

Once you’ve done that, we have to install an operator that will take your newly learned model and apply it to the XValidation data sets. This operator is called, surprisingly, a ModelApplier.

Right click on the second Operator Chain > Select New Operator > ModelApplier

YALE Model Applier

Performance Evaluator

The last operator you’ll need is the Performance Evaluator. This operator gives you the option to see the model’s prediction accuracy, correlation, squared correlation, and a host of other performance measures. This lets you know, right off the bat, if your model is any good. I would never build a YALE experiment without some sort of performance evaluation measures!

YALE Performance Evaluator
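The simplest of those measures, prediction accuracy, is just the fraction of validation rows the model labels correctly. A sketch with hypothetical labels:

```python
# Hypothetical actual vs. predicted trend labels from a validation set:
actual    = ["UP", "UP", "DOWN", "RANGE", "UP", "DOWN"]
predicted = ["UP", "DOWN", "DOWN", "RANGE", "UP", "DOWN"]

correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)
print(f"Prediction accuracy: {accuracy:.1%}")  # 5 of 6 labels match
```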

Final Experiment

If you’ve followed along and done everything correctly, your final experiment framework should look like the image below. If it doesn’t, then please go back and make fixes. Any deviation from this framework could produce experiment errors, which cause baldness.

YALE Final Experiment

There you have it, you’ve built your first YALE experiment! Of course you can cheat and just download the XML file here (in zip format): Gold XML

In Lesson IV we’ll cover setting your preferences and running the model for the first time. In Lesson V, we’ll cover how to interpret the results and check out some of the data visualization capabilities YALE has.

Thanks so much for attending the Neural Market Trends University! Please feel free to leave me a comment if you have any questions.

Currency Relationships

I was inspired to generate three data graphs from my currency models for my readers after reading Maoxian’s post on the British Pound. These graphs show price relationships between the Euro and the US Dollar Index, the Euro and the British Pound, and finally the Euro and the Swiss Franc.

Euro/USD Index

EuroUSD Chart


Euro/British Pound

EURGBP Chart

Euro/Swiss Franc

EURCHF Chart

With the exception of the Euro/British Pound, the price relationships between the Euro and the USD Index and between the Euro and the Swiss Franc are pretty much linear. :)
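To put a number on “linearly correlated,” you can compute a Pearson correlation coefficient. The price series below are illustrative stand-ins, not the actual chart data:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation: +1 means a perfectly linear positive relationship."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up Euro and Swiss Franc closes that move together:
eur = [1.340, 1.352, 1.361, 1.358, 1.365]
chf = [1.630, 1.636, 1.642, 1.640, 1.645]
r = pearson(eur, chf)  # close to +1 for a near-linear relationship
```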

[tags]Currencies, Trends, FOREX, Franc, Euro, Pound[/tags]

Building an AI financial market model – Lesson II

Alright class, let’s get started! Before we begin with today’s lesson, we should review the four key questions we asked about building an AI financial market model in Lesson I. They were:

  1. Do I have the right software?
  2. What do I want to model? What are my input and output variables?
  3. Do I have the financial data to model with?
  4. Is the data in the right format?

Hopefully you’ve answered all these questions by now and have downloaded and installed YALE. If not, I suggest you review Lesson I before continuing with this lesson.

In this lesson we’ll do two important things:

  1. Review the input variable spreadsheet
  2. Build the output variable (trend)

Please feel free to email me or post a comment if you have questions or need clarification on anything covered in today’s lesson.

Review the input variable spreadsheet

For those who have decided to use our lesson example (Gold trend), and have downloaded the GA-Gold.xls, please open up the spreadsheet. Once you’ve opened the file, you should see something like this.

GA-Gold Example

The GA-Gold.xls spreadsheet should contain 8 columns: 1 for the date and 7 for the input variables we identified in Lesson I. If you remember back to Lesson I, we discussed the relationship between input and output variables. The input variables are the data that drive your output variable; in this example, the Gold trend. You’ll notice that our spreadsheet only contains input variables and no output variables. How are we going to get our output? What do we do now? The simple answer is, we roll up our sleeves and build the trend!

Build the output variable (trend)

For this portion of the lesson you’ll need to find a chart of your asset or security and identify the UP, DOWN, or RANGE trend area. For the lesson example, just download a chart of Gold prices from 1997 through today and look at the areas where Gold is in a DOWN, UP, or RANGE trend.

Tip: Make sure that the chart you use covers the data range you plan to model. It’s best to use a slightly larger range to help you inspect the chart visually and not miss any key information at the edges.

Gold Lesson Example II


Now comes the hardest part of this exercise: add a column called “GC Trend” to GA-Gold.xls and enter the words UP, DOWN, or RANGE in that column as they relate to the dates shown on the Gold chart and on the spreadsheet. What you are doing is building the output variable of the model; when we run the AI analysis, the software will find relationships between your inputs and the output to generate UP, DOWN, or RANGE signals when you test the model later with new data.

Tip: You can add other words like STRONG UP or BUY if you like. It depends on what words you want to use to describe the trend.
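For readers who would rather not label hundreds of rows by hand, here is one rough programmatic alternative (purely illustrative; the thresholds and prices are made up): label each bar by how far the price sits above or below its trailing average.

```python
def label_trend(prices, window=10, band=0.01):
    """Label each bar UP/DOWN/RANGE by its % distance from a trailing mean."""
    labels = []
    for i in range(len(prices)):
        lo = max(0, i - window)
        trailing_mean = sum(prices[lo:i + 1]) / (i + 1 - lo)
        change = (prices[i] - trailing_mean) / trailing_mean
        if change > band:
            labels.append("UP")
        elif change < -band:
            labels.append("DOWN")
        else:
            labels.append("RANGE")
    return labels

# Made-up Gold closes: flat, then a rally, then a sell-off.
labels = label_trend([300, 302, 301, 330, 360, 355, 310, 280])
```

Eyeballing the chart is still worth doing, though; a crude rule like this will mislabel choppy stretches that a human would call RANGE.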

When you’re finished, your spreadsheet should look something like this:

Gold Example Final Input


Now, save your spreadsheet under another file name. This new spreadsheet becomes the input data you will build your trend model from. Of course you can skip all this hard work and download the lesson example spreadsheet: Gold Final Input.

Tip: Save this spreadsheet! Every 3 months or so, you should add more input data to this spreadsheet and re-optimize your model. This is important because the market changes all the time and new patterns and relationships emerge. All you have to do is add UP, DOWN, or RANGE to the GC Trend column.

Congratulations! Your hard work is now complete! The work gets easier from here on but we still have to cover a lot of ground! The next step will be to build the AI experiment, load in your spreadsheet, and create a model! We’ll tackle this in Lesson III and Lesson IV.

Digital Breakfast Status

This is an update about my Digital Breakfast site. My attempts to fix the site have been fruitless. I’m afraid I have to abandon the site and figure out another way to extract my posts from the database. Until that happens, I will continue to redirect my readers from Digital Breakfast to this new site, www.neuralmarkettrends.com.

Thanks for understanding and sticking by me. I really appreciate it.