Thursday, 30 May 2013

Simple multi-platform build

Over the last couple of weeks I've finally been able to get back into the code that I had developed over two years ago.

One of the first tasks has been to connect the rudimentary SSP Agent code to some kind of cross-platform UI toolkit so that I can start building a UI to help test the ongoing development. In the end I've picked wxWidgets as it seems to have the basic features I require without making the build process too complex. The other obvious contender was of course Qt, which is very tempting but involves a more complex build process. Later down the line, when I get to the stage of building the UI engine for Semprola itself, I may need to revisit this decision, but for now wxWidgets is entirely adequate.

I have also extended my Eclipse-based build setup so that I can compile for Linux (Ubuntu 12) as well as for Win32 and Mac OS X from the same codebase. Just today I successfully completed a very basic build of the SSPAgent as a static library linked into an extremely simple wxWidgets application on all three platforms.

So, now I'm finally ready to start some new development of the actual codebase!

Thursday, 17 January 2013

It's been two years!

Wow, I've just seen that it was exactly two years ago that I last posted anything to this blog. This year my focus will be shifting back to this research project. My main goal for 2013 is to release an early beta of Semprola that other researchers can play with. I'll post more about my plans as they develop.

Sunday, 16 January 2011

The SP Graph is not for humans

I was contacted recently by Yuriy Guskov who has written a critique of semantic web and semantic programming.

Obviously there are many details Yuriy raises that it would be interesting to discuss, such as the nature of references and their context, and how SP intends to define contexts. Some of this will become clearer when I put together an updated version of the 2008 paper and publish more details about how I am actually implementing SP.

However, there is one general comment that I would like to make just to be sure that a key aspect of SP is sufficiently clear. There is no expectation that humans will ever 'read' the SP Graph directly. The SP Graph is intended to represent semantic relationships and information as accurately as possible for the SP Agents, not for humans. That is why the SP Graph is constructed out of unique IDs, not out of words or text.

In the same way that humans use OpenOffice to read .odt files (viewing the data as a WYSIWYG formatted text file, not a large block of XML), so too humans will need to use SP Agents to read and interact with the SP Graph. Language is a very useful way for humans to interact with each other and with recorded information, but human languages are often very ambiguous, with a large amount of unspoken context. Computers (internally) require a much higher degree of semantic precision in how their data is organised so that their deterministic syntactical manipulations of that data are semantically correct.

In many ways, therefore, SP can be seen as a step away from many of the current approaches to artificial intelligence; indeed, it is not an attempt to create artificial intelligence. Rather, SP is an attempt to improve the way that humans directly program computers.

Friday, 10 September 2010

Time for flex and bison

A couple of weeks ago Phil lent me a book in which the creators of various programming languages are interviewed (Masterminds of Programming). I've only read a few of the interviews so far, but these have included the creators of C++, BASIC, Python and Forth. On the whole it's been fascinating reading and very relevant to this research project.

Fortunately, one theme that seems to be repeated is the idea that in general it's a 'good thing' for people to experiment with creating new programming languages. This of course is very encouraging for me to read, especially coming from these particular people.

As well as being full of discussion about generic issues that are useful for creating, disseminating, evolving and standardising a programming language, there have also been a few very specific bits of advice that I will try to follow, one of which is to use flex and bison when building your compiler.

Flex and bison are tools that generate fast and correct C code for the first two steps of any text compilation process, namely lexical analysis and grammar parsing. To date I've been hand-crafting the C code for these two steps of my Textual Semprola compiler. I've known about flex and bison for a while, but always thought the cost of the learning curve and the code re-write were not quite worth it.

However, I'm about to go through a phase where I considerably extend and tinker with the syntax for Textual Semprola, and so now I need to re-write this part of the code anyway. As I keep hearing more people sing the praises of flex and bison, I've decided that this time I should push through the learning curve and start using these powerful little tools. The great thing is that once I've made the switch it should be easier to experiment with the syntax in the future.

One other thing to mention is that I've recently started some more client work. While this will no doubt delay the progress of this research project it does also help fund it!

Thursday, 5 August 2010

The demise of Google Wave is a stark reminder of just how hard it can be to launch a new technology idea:

http://www.theregister.co.uk/2010/08/04/google_wave_dead/

A key take-home lesson for me is to try to make the success of SP less dependent on the network effect. Hopefully SP can be more like the mobile phone than the original telephone (mobility is useful for one user even if everyone you talk to has a landline).

Monday, 26 July 2010

I've started putting together a very simple FAQ about SP (see the What is SP? (FAQ) link on the right). I've given up trying to format the text of the FAQ nicely as the Blogger editor for such pages is unbelievably useless.

One key thing that is still missing from the FAQ is a basic description of the nodedge model. I will add this next as there have been some changes since the 2008 PDF 'paper' that is linked to from my first post.

Thursday, 22 July 2010

After many years working away at the ideas of Semantic Programming the project is finally getting towards the point where a release of a very simple 'SP agent' might be vaguely possible. So, in order to record more of my thinking in these last stages before the release (and then beyond) I have finally decided to start writing a blog about the project.