On Organizing Content, in Surfulater-NextGen

If you’ve read my earlier posts about Surfulater-NextGen (SNG) you’ll know I’m moving to Tags as the way to categorize and organize content.

Surfulater’s Folders work pretty well, and the ability to have an article in many folders at once is a great feature that’s not often seen. But Surfulater also has Tags, which means you have two quite different ways to organize and locate content. And the Tags aren’t hierarchical! Finally, putting an article into multiple folders is a little cumbersome.

To simplify and enhance content organization, SNG uses Tags exclusively, albeit greatly enhanced from what you currently have. Tags can be nested, letting you build a neatly structured tag hierarchy, or tag tree, with as many levels as you want.

Tags Tree

Articles can have as many tags as you want. And Tags aren’t restricted to just a single word, as they are in some systems.

Article Tags
Tags in an article

You enter new tags in the Add Tags field.

The Auto-suggest dropdown list makes it easy to add the tags you want, including adding multiple tags at once and removing existing tags.

Tags Auto-suggest dropdown

If a tag doesn’t exist, click on Create to add it.

Create a new Tag

In this example I want to add a tag named XBMC. The Tags tree lets you select the parent Tag for this new tag. A given tag can be added to as many tree branches as you want.
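Nested tags that can live on several branches can be pictured as a small data structure. The sketch below is purely illustrative — the tag names, the `parents` field and the `branchesOf` helper are my own invention for this post, not SNG’s actual implementation — it just shows how one tag, XBMC here, can resolve to multiple tree branches:

```javascript
// Each tag records the parent tags it appears under; a tag with
// more than one parent sits on more than one branch of the tree.
const tags = {
  Media:    { parents: [] },
  Software: { parents: [] },
  XBMC:     { parents: ["Media", "Software"] }, // one tag, two branches
};

// Articles simply carry a list of tag names; as many as you want.
const articles = [
  { title: "XBMC setup notes", tags: ["XBMC"] },
];

// Walking up the parent links yields every branch a tag lives on.
function branchesOf(name) {
  const tag = tags[name];
  if (!tag || tag.parents.length === 0) return [[name]];
  return tag.parents.flatMap(parent =>
    branchesOf(parent).map(path => [...path, name]));
}
```

With this shape, `branchesOf("XBMC")` yields both the Media and Software branches, which matches the idea of adding one tag to several parts of the tree.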

This should give you a good overview of some of the Tags capabilities in SNG. I’ll continue with more on Tags in the next blog post.

Clearly I’ve been remiss in not posting on the blog for way too long, likely a record for me. Part of the reason is that I’ve simply had my head down working hard on SNG. I’ve been working on a lot of minutiae and haven’t felt that I’ve had a lot to say, but in fact there is plenty. I’ve already got the next post in my head, and there is quite a bit more I can write about; I just need to force myself to do it.

All the best,

What’s happening with Surfulater & what’s Neville up to.

Jim Parker just posted a comment which has prompted me to write, although it is fair to say I should have blogged before now. In essence Jim commented that no development seems to have been done on Surfulater in a while, and my lack of recent blogging leaves him less than inspired about its future. Without doubt very fair comments.

Surfulater got to a point where it was working pretty well and did most things our users had requested and that I wanted of it. There are problems no doubt, such as the poor foreign language support, especially with search. This dates back to a poor implementation decision I made in the very beginning of Surfulater’s design, one which cannot be undone or easily fixed, otherwise it would have been long ago.

To resolve the foreign language support problem and address some other issues requires an extensive and expensive rewrite, which can’t be justified. Furthermore I was beginning to see huge changes in the direction certain types of applications were heading and in the technologies being used to develop them. So it was time to step back and rethink the entire way I develop software, the technologies I use, and where all this fits in with Surfulater.

The huge game changer in my opinion is the move to applications that run on multiple devices (Desktop PCs, Tablets, Smart Phones) and enable you to access your information on any of these devices, from anywhere. Furthermore you can add and update your information and have the changes available on each device automatically and in real time. This is where I’ve wanted Surfulater to head for quite some time, however it will never happen using the current C++ Windows Desktop code base.

To move forward it was time for a major change: time to move out of my comfort zone of Microsoft Windows Desktop software development in compiled C++, with all the tools I’ve been using for many, many years, to the new world of applications that run on multiple devices, use dynamic languages, render their user interfaces in HTML, and have severely limited local file systems. Hard to imagine a more different world, but that’s what I’ve been doing most of this year.

This new world revolves around developing in HTML5, Javascript and CSS and packaging that up into applications that will run on a variety of devices. It has and continues to be a huge learning curve, one that is taking a lot of time and resources.

There was no way I was going to jump into the deep end and try to redevelop Surfulater, as I had way, way too much to learn first. So after much prodding from an old friend I began work on a project with him to create a range of iPad (Tablet) applications. Apart from taking quite a long time, the results so far are great, with the first app nearing completion.

In essence I’ve designed and built an application that my partner Stefan will use to generate applications.  So there are two actual applications: the Builder that he uses and the Client Apps that the builder creates and that we’ll sell.

Both of these are written in HTML5, CSS and Javascript. The builder is by far the most complex, using a client-side database, HTML5 templating, local file system access, remote server uploads and downloads etc. It uses jQuery, jQueryUI, Knockout.js and a range of other Javascript plugins and libraries. It is pretty slick and easy to use, enabling Stefan to produce iPad applications quickly.

The end-user client applications use jQuery Mobile and jQuery and are packaged using Phonegap and other tools.

All up I’ve written around 7,000 lines of Javascript so far and have learnt one heck of a lot along the way. And this time around I’ve designed everything to work in any language! There is still some way to go before we’ll be ready to start shipping the first apps, but we are making excellent progress. Stefan was only able to start using the first Alpha release of the Builder about 6 weeks ago and has already completed the first application, which happens to be the biggest and most complex of the series of applications we’ll be producing.

On the side I am spending as much time as possible researching ways to accomplish my ideal Surfulater type app, with information replicated on all devices. To say this is complicated is an understatement. The research alone is very much one step forward, two steps back, however I’ve never been one to be easily deterred.

So that should give everyone a clearer picture of what is happening in my world at this time.

If I don’t post again before Xmas, and given my track record of late I probably won’t, I wish you and yours a very happy and safe Xmas and all the best for 2012.


PS. It’s nice to have blogged again.

Why I use Surfulater

I continue to be pleasantly surprised at the diverse range of uses Surfulater is being put to and the diversity of our customer base. This is both a strength and a weakness for Surfulater. Its ability to be used so successfully by so many people for so many different tasks is a real strength that speaks well for the underlying design and its flexibility and adaptability. The weakness comes in our difficulty in promoting Surfulater to such a diverse user base. It is clearly much easier to sell a product into a narrower, well defined market. That said, we are making some steps to be more focused in our marketing efforts, which we hope will be fruitful. Of course for our users this isn’t a weakness at all, far from it in fact.

And now to the real reason for this article. I’ve always been very interested in getting hold of real life user stories and I know our customers are interested in reading about how others are using Surfulater. J.William LaValley MD kindly stepped up and offered to write an article on his experience with Surfulater, which I present here in full and unedited, of course.

Why I use Surfulater.
I’m a “biogeek” physician who uses the internet for many hours each week for medical science research.  My projects require the ability to accumulate large amounts of related scientific articles and the capacity to access them, with annotations, comments and related links – quickly, efficiently and reliably.
I must be able to find new undiscovered relationships among complex textual and graphic information that has not been described before.  In the course of this study on the internet over the last 9 years I have tried numerous different applications to help me capture, store and organize the massive amount of information in this endeavor.
For a year I used Onfolio and it was slow, very cumbersome and inefficient – and it frequently crashed.  The files created in Onfolio frequently corrupted and could not be re-accessed – “not good”. 
Next, I tried Mind Manager.  It helped me map out the general organizational structure of my projects and to link data ‘notes’ and internet links to various topics.  Mind Manager was a better solution for me than Onfolio.
However, when using large amounts of textual data and related files (and I do mean really large) Mind Manager was (and is) slow and laborious to capture, link, organize, re-access, and use the information.  The biggest problem is the Mind Manager would frequently “freeze” when I tried to link the Mind Manager topics to Microsoft Word and Microsoft Excel files.  Sure, Mind Manager can perform this function as it describes…the problem is that it takes a l-o-n-g time to do so when there are multiple topics linked to multiple portions of the same Excel file – and frequently the entire app just “freezes” – not good.
The result is the data is lost in Mind Manager and the app had to be closed down, restarted and the work was often lost – “worse than not good”.
Then, I stumbled onto Surfulater.  “Surfulater” seemed like an odd word to me – yet it made sense because internet surfing is such a big part of my work.  Surfulater has a trial version that is risk-free, so I tried it.
Wow.  My professional life changed.  Surfulater is literally saving lives by the amazing functional ability to gather large amounts of specific, targeted internet information and data – “on the fly” with just a few clicks.
Surfulater allows me to now literally ‘zoom’ through large amounts of information very quickly.  With Surfulater, I immediately and easily organize information, link new data to related data in the same file, link new data to related data in external files and folders on the same and other hard drives, and efficiently copy data to easily accessed related databases.   Surfulater lets me easily include comments, highlighting, do text formatting and editing, capture graphics, add links from related web pages…and much more.
Quickly and easily, from within Surfulater (and without opening my email application) I send the captured data to colleagues by built-in email function that automatically loads their addresses with a single click.  They can view it in their email without Surfulater.
I send this same data by email to other Surfulater users who plug it into their Surfulater databases and can now use it for their work.  I use Surfulater to create quick and simple web pages of simple HTML from the data that I have created myself.  Amazing.
Surfulater lets me surf swiftly while nearly effortlessly ‘scooping’ up important relevant information, into efficiently organized, easily accessed information.
Surfulater organizes my information in a simple-to-use tree format.  Links made in Surfulater are lightning fast – there are no “freeze-up” delays in Surfulater when you are surfing fast, capturing information quickly, and linking it for re-access later.
Surfulater lets me search any information in the Surfulater database by text word and returns each ‘hit’ with a highlighted reference.  I can see each “article” with one click.   Surfulater lets me sort the information by date captured or alphanumeric order of the title of the “article”.   Surfulater has advanced sorting features that allow me to sort sub-topics only, without having to sort the entire database.
Surfulater lets me copy and paste sub-topics of one database into other Surfulater databases in a simple 2-click step – “very handy”.
Surfulater is the best data-gathering tool for any serious internet surfer.  The Surfulater Forum is actually relevant and helpful for answering questions, solving problems, and requesting new features.
Surfulater creator and code-author, Neville Franks, is extraordinarily responsive in the Surfulater forum and in developing customer-requested features in each update.
If you surf the internet and you want to capture, organize, save, inter-connect, link, search, re-access, send, sort, and otherwise use the information for any reason, then your best solution is to “surfulate” with Surfulater.
A dedicated Surfulater

 J.William LaValley MD 

Thanks again William. If you would like to follow in William’s footsteps we would love to hear from you. Contact details in the usual place, here.

Not Happy!

For the past week and a bit I’ve been doing a lot of research into synchronization techniques, client/server technology, tcp/ip, Windows and Unix sockets and the like. This is all related to the work I’m doing to enable Surfulater databases to be synchronized, either locally across a LAN or across the globe via the Internet.

Synchronization enables you to use the same Knowledge Bases on say your Work and Home PC and have them automatically kept in sync, so you don’t have to manually copy them back and forth. I use Surfulater on a Desktop and Notebook PC and regularly switch between the two, but first I have to copy all of my Knowledge Base files across, some of which are quite large. Then when I’m finished I have to ensure I copy them back. KB synchronization will do all of this for me, without me lifting a finger. The good news is I’ve got a proof of concept implementation working.
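As an illustration only — the real synchronization mechanism isn’t described here, and `syncKnowledgeBases` with its `modified`/`content` fields is invented for this sketch — a simple last-write-wins merge captures the basic idea of keeping two copies of a Knowledge Base in step without copying whole files back and forth:

```javascript
// Toy last-write-wins merge of two knowledge bases. Each knowledge
// base is modeled as a map of article id -> { modified, content },
// where "modified" is a timestamp. For each article, the copy with
// the newer timestamp wins; articles present on only one side are
// simply carried across.
function syncKnowledgeBases(a, b) {
  const merged = {};
  for (const id of new Set([...Object.keys(a), ...Object.keys(b)])) {
    const fromA = a[id], fromB = b[id];
    if (!fromA) merged[id] = fromB;            // only on PC B
    else if (!fromB) merged[id] = fromA;       // only on PC A
    else merged[id] = fromA.modified >= fromB.modified ? fromA : fromB;
  }
  return merged;
}
```

A real implementation has to cope with much more (deletions, clock skew, conflicting edits made on both sides), which is part of why the research is one step forward, two steps back.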

So you must be wondering what’s with the Not Happy! Well, during my research I came across a particularly interesting article which I only had time to glance at, and put it aside to read in full later on. Well, later on arrived last night and for the life of me I can’t find the slightest hint that the aforesaid article ever existed. I’ve searched my Surfulater Knowledge Bases, looked at the last few weeks’ articles in the Chronological History, searched my Web Browser Favorites and History on my Desktop and Notebook PCs, and come up completely empty. In complete exasperation I used Google to search for the terms that I thought should locate the Web page for me, worked through pages and pages of results, tried other search terms, and after 2 hours gave up.

There is a lesson to be learnt here, and I for one should know it better than anyone. That’s why I’m not happy!

One or many Knowledge Bases and Tree Filters

There are two schools of thought when it comes to filing information: store everything in one big file, or break things up into smaller, easier to manage, separate files.

When I designed Surfulater I went down the path of one big file, thinking this would make Surfulater easier to use, because you didn’t need to go around opening, closing and creating files. It didn’t take long at all for our users to tell us they really did want to use multiple files, and Surfulater was changed to suit. There is no right or wrong way to handle this, although some may disagree. It is a matter of what works best for each individual.

The argument for one big file stems from the point of view that all content should be stored together in one place, and the software provides sufficient means to view and work with various subsets of information, to overcome the problem of managing and working with what otherwise might be overwhelming. For example you could hide all folders except those in a specific branch, or only show articles that match some search criteria. You can think of these as filters which strain out most content, leaving behind only the juicy relevant bits.

Surfulater already has some capabilities that let you restrict the content shown in the Knowledge Tree. For example you can hide all articles except those in a specific folder, hide all articles period, or show all folders fully expanded, without any articles. And there is the Chronological tree view that shows content according to when it was added. These are a good start, but we need to do more, and we will.

Fortunately the tree control I’ve written for Surfulater is very fast and was designed to enable tree items to be shown or hidden at will, without having to re-populate the tree from scratch. You can see this for yourself by using F9 or Show Articles and notice that the tree is instantly updated, even when it contains a large number of items. Without this capability, filtering a large tree would be impractical.
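The idea behind that kind of instant filtering can be sketched in a few lines: items keep their place in the tree and carry a visibility flag, so a filter pass just toggles flags rather than rebuilding the tree. This is a toy illustration with invented names, not Surfulater’s actual tree control:

```javascript
// A flattened toy tree: every item stays in the list permanently
// and only its "visible" flag changes when a filter is applied.
const tree = [
  { title: "Research",   isFolder: true,  visible: true },
  { title: "XBMC notes", isFolder: false, visible: true },
  { title: "Recipes",    isFolder: true,  visible: true },
];

// "Show Articles" (the F9 toggle) becomes a single pass that flips
// flags; no tree nodes are created or destroyed, which is why it
// stays fast even on large trees.
function showArticles(tree, show) {
  for (const item of tree) {
    if (!item.isFolder) item.visible = show;
  }
}

// A user-defined search-string filter works exactly the same way:
// folders stay visible, articles show only if their title matches.
function filterByText(tree, text) {
  for (const item of tree) {
    item.visible = item.isFolder || item.title.includes(text);
  }
}
```

The design choice is that filtering is a view-level operation: because nothing is ever removed from the underlying tree, clearing a filter is just another flag pass, with no re-population cost.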

So the groundwork has been laid to enable us to provide more ways to filter information to help you focus on what’s important at a particular point in time. I’ve got several ideas for filters including letting you create your own via say a search string. I’d welcome your suggestions on this.

I need to wrap up as I’m told some of my posts are getting a bit long. I’ll end with a few comments. Multiple knowledge bases currently work best for me. This may change as more sophisticated capabilities like filters and keywords are added, but I somehow doubt it. Trees are, well, trees, and the bigger they get the more time you waste working the tree instead of getting things done.

Personal Knowledge Management

I’ve stumbled across several interesting articles on Personal Knowledge Management recently. The first is by Steve Barth entitled The Power of One and was published in Knowledge Management Magazine. Steve’s article discusses the importance of implementing knowledge management systems within an organization and includes information I’m sure will be of interest to all Surfulater users.

Personal knowledge management (PKM) involves a range of relatively simple and inexpensive techniques and tools that anyone can use to acquire, create and share knowledge, extend personal networks and collaborate with colleagues without having to rely on the technical or financial resources of the employer. Implemented from the bottom up by one knowledge worker at a time, these techniques can increase productivity and enthusiasm and help to build momentum that can overcome the technological and social barriers to top-down, enterprise-wide KM initiatives.


Information overload is a fact, not a theory, and there is evidence that most people lack the skills or tools to keep up in the Knowledge Age.

Steve talks about Personal intellectual capital and how employees can increase their value both within an organization and in a broader sense by using PKM techniques.

Getting a grip on the shifting mass of information is an important tactic, but using PKM techniques and tools, individuals can go farther, to enhance their abilities and career potential. Effectively managed personal knowledge assets become the currency of personal intellectual capital.

Surfulater is being used by a diverse group of people to collect and manage all sorts of information and of course build and retain knowledge. This article should be of interest to all Surfulater users, especially those using it within an organization.

Ontology is Overrated

Ontology is Overrated is a PodCast I recommend you listen to if you are interested in finding out more about organizing information. Clay Shirky gave this speech at the O’Reilly Emerging Technology Conference, held in San Diego, California, March 14-17, 2005. Clay talks about why conventional ways of organizing information via categories and hierarchical trees are flawed and discusses alternatives, such as search. This is in line with my thoughts, some of which are here, and with comments from Surfulater users in our Forums.

Knowledge Lost

Two articles caught my eye in last week’s Melbourne Age IT section. The first was about two Californian businessmen who are setting up cruise ships 5.3km off the coast of Los Angeles, where they will employ 600 software developers per ship and have them working 12 hour shifts, 7 days a week. These foreign workers will be classed as seamen and be able to come ashore without requiring visas. The ships will cost $US10M apiece to fit out. At first I thought this had to be an April Fools’ Day joke, but apparently they are very serious about it. One has to wonder where we are heading with such goings on. I guess the ships aren’t heading anywhere, but what about the workers?

The second article was about the large numbers of qualified IT staff that have been let go from Australian companies in the last 3 or so years in the name of downsizing. It seems that these companies are now realizing that they’ve lost a vast amount of knowledge from this process, knowledge that will be quite costly to recover, assuming of course that it can be. This also ties in with offshore development (and on cruise ships) and makes me wonder whether all of the valuable information that is built up off-shore can be transferred back to its owners, or do they write that off in exchange for the money they save using off-shore development.

As far as I’m concerned building and retaining knowledge and the intellectual property that flows from that is fundamental to the long term success of any business. It’s what gives us the edge.

Managing Knowledge Pt 1

I spend a reasonable amount of time reading about and looking at Knowledge Management (KM) style software. Lots of different types of programs can be used or abused into performing knowledge management tasks. These range from storing bits and pieces of information in Word Documents or text files (or PDF files!), to Outliners and Notepads with Trees to structure and categorize information, to ever more complex programs that morph trees and display them as graphs, or in other visually exciting and sometimes useful ways. For example programs like Grokster use circles within circles where you drill down deeper and deeper to see things of interest. (Surfulater customers who visit the forums will have seen threads about this there.) The higher end knowledge management tools tend to be quite complex and expensive beasts indeed.