@nathanburles
Member since May 2008
0 Recommendations

Nathan Burles 

### BIO

I have loved computing and programming for many years, and in 2010 I achieved first-class honours in a four-year Master of Engineering degree in Computer Systems and Software Engineering. I am now working towards a PhD in Neural Computation.

Over the years I have become proficient in many languages, both desktop (C, C++, Ada, Prolog, Java, Python, Scheme) and web-based (PHP, MySQL, JavaScript, AJAX, and of course HTML+CSS). During my PhD I have taught undergraduate-level classes (for example using C, Ada, Java, Python, Scheme, and web technologies).

I have been using Drupal since 2008, initially transitioning a Drupal 5 website to Drupal 6 (which required porting some of the contributed modules to be compatible with 6). Since then I have built many sites with Drupal 6, and now Drupal 7 -- creating themes; contributing core patches; integrating, extending, and maintaining contributed modules; and creating custom modules where necessary. My portfolio includes a number of neighborhood council websites from Los Angeles (used to manage all of their events, meetings, and more), as well as various other unrelated sites. I have submitted patches to , the most recent having been included in Drupal :

### Areas of Expertise

Operating systems:

* Linux
* Windows (all variants, including server)

Programming languages:

* C / C++
* Java
* PHP / MySQL / CSS / HTML / JavaScript / AJAX
* Basic / Visual Basic
* Ada
* Python

Content management systems:

* Drupal 5
* Drupal 6
* Drupal 7
$20 USD/hr
8 reviews
5.2
  • 89% Jobs Completed
  • 100% On Budget
  • 100% On Time
  • 33% Repeat Hire Rate

Recent Reviews

Education

1st Class with Honours, Master of Engineering in Computer Systems and Software Engineering

2005 - 2010 (5 years)

Publications

Full Implementation of an Estimation of Distribution Algorithm on a GPU

An implementation of an Estimation of Distribution Algorithm (specifically a variant of the Bayesian Optimisation Algorithm) using GPGPU. Winner of the GECCO GPGPU Challenge 2011.
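The paper's implementation ran a Bayesian Optimisation Algorithm variant on the GPU; as an illustration of the general EDA loop such algorithms build on, here is a minimal univariate EDA (UMDA) on the OneMax problem in Python. This is my own toy sketch, not the paper's implementation, and all parameter choices are illustrative.

```python
import numpy as np

def umda_onemax(n_bits=20, pop=60, elite=20, gens=60, seed=0):
    """Univariate EDA: estimate a per-bit Bernoulli model from the best
    individuals, then sample the next population from that model."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)            # initial marginal for each bit
    for _ in range(gens):
        samples = rng.random((pop, n_bits)) < p
        fitness = samples.sum(axis=1)   # OneMax: fitness = number of 1 bits
        best = samples[np.argsort(fitness)[-elite:]]
        # Model step: re-estimate each bit's marginal from the elite set,
        # clipped away from 0/1 to keep some exploration.
        p = best.mean(axis=0).clip(1 / n_bits, 1 - 1 / n_bits)
    return p

p = umda_onemax()
# After convergence the marginals are biased strongly toward 1 (the optimum).
```

A Bayesian Optimisation Algorithm replaces the independent per-bit model above with a learned Bayesian network, which is the part that benefits most from GPU parallelism.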

A rule chaining architecture using a correlation matrix memory

This paper describes an architecture based on superimposed distributed representations and distributed associative memories which is capable of performing rule chaining. The use of a distributed representation allows the system to utilise memory efficiently, and the use of superposition reduces the time complexity of a tree search to O(d), where d is the depth of the tree. Our experimental results show that the architecture is capable of rule chaining effectively, but that further investigation is needed to
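The two ingredients named above (a distributed associative memory plus superimposed representations) can be sketched with a Willshaw-style binary correlation matrix memory. This is my own illustrative toy in Python, not the paper's architecture; the dimensions, codes, and threshold choice are all assumptions.

```python
import numpy as np

class BinaryCMM:
    """Willshaw-style binary correlation matrix memory: store key->value
    pairs by OR-ing in outer products; recall with a matrix-vector product
    followed by a threshold at the key's weight (number of set bits)."""

    def __init__(self, key_dim, value_dim):
        self.W = np.zeros((key_dim, value_dim), dtype=np.uint8)

    def store(self, key, value):
        # Hebbian-style update: set every weight where key/value bits co-occur.
        self.W |= np.outer(key, value).astype(np.uint8)

    def recall(self, key, threshold):
        # Sum the rows selected by the key's set bits, then threshold.
        return ((key @ self.W) >= threshold).astype(np.uint8)

def bits(positions, n=8):
    v = np.zeros(n, dtype=np.uint8)
    v[list(positions)] = 1
    return v

cmm = BinaryCMM(8, 8)
x1, y1 = bits({0, 1}), bits({0, 1})
x2, y2 = bits({2, 3}), bits({2, 3})
cmm.store(x1, y1)
cmm.store(x2, y2)

# Single-key recall retrieves the associated value.
assert np.array_equal(cmm.recall(x1, threshold=2), y1)

# Recalling with the superposition x1 OR x2 retrieves y1 OR y2 in ONE
# matrix operation -- the property that lets superposition collapse the
# cost of exploring many branches into a single recall per step.
assert np.array_equal(cmm.recall(x1 | x2, threshold=2), y1 | y2)
```

With sparse fixed-weight codes the superposed recall can introduce spurious "ghost" bits; the tiny example here is constructed so that none occur.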

Improving the Associative Rule Chaining Architecture

This paper describes improvements to the rule chaining architecture presented in [1]. The architecture uses distributed associative memories to allow the system to utilise memory efficiently, and superimposed distributed representations in order to reduce the time complexity of a tree search to O(d), where d is the depth of the tree. This new work reduces the memory required by the architecture, and can also further reduce the time complexity. [1] A Rule Chaining Architecture Using a Correlation Matrix Memory, 2012

Extending the Associative Rule Chaining Architecture for Multiple Arity Rules

The Associative Rule Chaining Architecture uses distributed associative memories and superimposed distributed representations in order to perform rule chaining efficiently [1]. Previous work has focused on rules with only a single antecedent; in this work we extend the architecture to work with multiple-arity rules and show that it continues to operate effectively. [1] A Rule Chaining Architecture Using a Correlation Matrix Memory, 2012

ENAMeL: A Language for Binary Correlation Matrix Memories

Despite their relative simplicity, correlation matrix memories (CMMs) are an active area of research, as they are able to be integrated into more complex architectures such as the Associative Rule Chaining Architecture (ARCA) [1]. In this architecture, CMMs are used effectively in order to reduce the time complexity of a tree search from O(b^d) to O(d), where b is the branching factor and d is the depth of the tree. This paper introduces the Extended Neural Associative Memory Language (ENAMeL), a domain-specific language for binary correlation matrix memories.
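To make the O(b^d) to O(d) claim concrete, here is a toy Python sketch of rule chaining through a binary CMM: one recall per tree level advances every branch simultaneously. This is my own simplification, not ENAMeL or ARCA itself, and the tokens and rules are hypothetical.

```python
import numpy as np

def store(W, key, value):
    # OR in the outer product of a binary key/value pair (Willshaw learning).
    W |= np.outer(key, value).astype(np.uint8)

def recall(W, key, threshold):
    # One matrix-vector product plus a threshold recalls the superposition
    # of every value whose key is present in the (possibly superposed) input.
    return ((key @ W) >= threshold).astype(np.uint8)

def token(positions, n=10):
    v = np.zeros(n, dtype=np.uint8)
    v[list(positions)] = 1
    return v

# Hypothetical tokens and rules: a -> b, a -> c, b -> d, c -> e.
a, b, c = token({0, 1}), token({2, 3}), token({4, 5})
d, e = token({6, 7}), token({8, 9})
W = np.zeros((10, 10), dtype=np.uint8)
for ante, cons in [(a, b), (a, c), (b, d), (c, e)]:
    store(W, ante, cons)

# Chaining from 'a' to depth 2: one recall per level, regardless of the
# branching factor -- d recalls instead of b^d separate searches.
state = recall(W, a, threshold=2)       # level 1: superposition of b and c
assert np.array_equal(state, b | c)
state = recall(W, state, threshold=2)   # level 2: superposition of d and e
assert np.array_equal(state, d | e)
```

A symbolic depth-first search of the same tree would visit each of the b^d leaves individually; here the superposed state carries all live branches at once.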