Surfulater, Under the Hood and Down the Road

I’ve been asked to write about my vision for Surfulater and decided a Blog post would be a good place for this. I’m afraid it is a bit long-winded, as I want to lay down some background material so readers will know where I am coming from. I’m told vision statements contain lots of motherhood gobbledygook. Excuse me for excluding such fluff and for not being as visionary as some may like.

I’ve been designing, developing and publishing software for over 20 years. For a number of years I worked with a team of programmers on vertical market applications, in a company of which I was a director. For the past 15 years I’ve worked predominantly on my own, on a product named ED for Windows, which is a full-featured programmer’s editor. ED is a very large and complex application, with a large and diverse user base who place many demands on it. It is a highly configurable application and can be extended via a built-in scripting language. It also supports some 35+ programming languages. Bottom line – a big, complex, powerful application that most people will never fully utilize.

For quite some time I’d been keen to develop other products and I finally made a small start in late 2003. I spend a lot of time on the Internet researching all manner of things. A lot of the time it is to do with programming, but also business, travel and other personal interests. I was very frustrated by the poor tools available for collecting and saving information I found while surfing, and needless to say, Bookmarks and Favorites just don’t cut it. So the idea for Surfulater was born.

This was a perfect new product idea for me, as it was something I desperately needed for my own use and at the same time I felt confident that other people had the very same need.

Having worked on a big and complex product (ED) for so long, I really wanted my next product to be much simpler, both in terms of its user interface and capabilities and in its programming. These were overriding goals from day one and will continue into the future.

Every product has people pulling it in all sorts of directions, including directions its designers never thought of, those in which they have no interest and more still to which the product is simply not suited. Software developers continually face the challenge of trying to meet their users’ needs. Oftentimes this leads to software with too many features, which in turn makes it too complex for the average user to grasp, let alone use effectively. Competition plays a part here as well, where developers feel they have no choice but to keep adding new features so their product appears to be as good as, if not better than, their competitors’. What a vicious circle this becomes. Our users also expect regular new releases, with more and more features, and if we don’t keep delivering who knows what they will do. See my Blog post on Creeping Featuritis for more on this.

I read this line recently: “Sometimes people you think are your customers aren’t your customers at all,” which, upon reflection, I have to agree with. Software companies that attempt to make their programs all things to all people may never achieve the success they’d hoped for. We must focus on a core set of capabilities, designed to solve a specific task or set of tasks. Adding endless fluff around the edges, to meet the perceived needs of users, will create a monster few people want to use.

This is a long-winded way of saying that I won’t let Surfulater get pulled in all sorts of different directions. Extra features will be added only if they make sense for the core product and don’t detract from it. I won’t be adding four ways to accomplish the same task and I won’t be adding mind-numbing commands like “Demote Children” or “Add Sibling” to the Knowledge Tree. If we do need to add more advanced features to cater for the needs of power users, then a Professional version may be the answer.

OK, so now that I’ve told you what I won’t do, let’s move to a more positive stance and give you some information on where I want to see Surfulater go. The question that prompted this article was from Alexander Deliyannis:

“Could you share your product “vision” with us? From its name and description it seems clearly web-content targeted, however its feature set – and the development planned – sound more ambitious.”

The core focus for Surfulater is to make it quick and easy to capture information you see on the Web, then enable you to manage and organize this information, quickly find it again, and finally, add value to it by adding notes and cross-reference links and by editing content.

It is clear that the underlying set of capabilities described above is applicable to a wider domain of information capture and management, and not just for grabbing stuff from the Web. This isn’t just a bit of good luck, but is integral to the design of the software from the outset.

I’d like to get a bit technical now and discuss some of Surfulater’s inner workings. This will help you better understand what Surfulater is capable of both now and in our vision of its future.

Depending on how you look at it, Surfulater is either pretty dumb or pretty clever. It is dumb because to a large extent it doesn’t know what it is doing, and it is clever because it delivers a lot of functionality in spite of this.

Surfulater needed to be as flexible as possible for several reasons. First, I wanted to develop a core workhorse that could potentially be used for a variety of different applications. Second, I knew that I didn’t have a clear picture of its potential uses or users, so I needed to design a platform that was open enough to move in various directions, as required. Third, I didn’t want the application to be locked in to a hard and fast set of code that couldn’t be cost-effectively re-used. Software re-use is a term that is bandied around a lot these days, with considerable effort devoted to it, but often with only limited success. With the cost and complexity of software development ever on the rise, the ability to re-use the code we write is critical for long-term business success.

So Surfulater has a core set of basic capabilities which are completely independent of the application it’s being used for. These include:

  • A high performance XML engine which stores and retrieves information.
  • A tree component that can display information directly from the XML engine. Windows applications typically have to copy information between the tree and its data store and build the hierarchical tree. These processes can dramatically affect performance, especially as trees get larger. Surfulater does not have these performance impediments.
  • A Content window that displays information from the XML engine, based on a set of HTML Templates stored in the XML file. The point to stress here is that the application does not include any hard-coded logic that dictates how information is displayed. This is handled entirely by information stored in the XML file. This means we can add new display templates, display existing content in different ways and change existing templates, all without changing one line of code in the program itself. Compare this to the way other Windows applications work!
  • A Metadata system that controls aspects of the UI from information stored in the XML file.
  • Dynamic HTML (DHTML) enables the user to expand and collapse content to get a better view of their information. It also paves the way for other UI capabilities in the future.
  • Style Sheets to control the look and feel of information displayed in the content window.
  • The ability to execute Javascript code to handle things such as user interface operations.
  • An HTML editor which enables all content to be edited in situ. In situ editing allows you to work more effectively because your flow isn’t interrupted by shifting focus to a new window to perform data entry. You also get the benefit of working with exactly the same form that you see while viewing content, so there is no context switch.
  • A fast full-text search engine, with Boolean AND, OR and NOT operators plus wildcards, that searches the XML engine directly.
  • A mechanism to enable cross-reference links to be added to content so you can tie related information together.
  • A mechanism that allows the same article to be placed in as many different folders as desired. Other programs only allow a record to be placed in one folder at a time, which is a major impediment for many people; there is often no single best folder for an article. (A small sketch of this idea follows the list.)
  • The ability to attach any external files to articles and store them as part of a knowledge base. For example Word, PDF or Zip files.
  • A system to prompt users and provide feedback without continually getting in their face and interfering with their work flow. This is primarily accomplished using pop-up tips that can be hidden as required on a per-tip basis. Tip information comes from an XML file instead of being hard-coded in the application.
  • Drag and drop, Cut, Copy and Paste, and Editing of tree items.
  • Sorting of XML content.
  • An integrated Help system built using Surfulater itself.
  • A Web Server.
  • A Database manager for efficiently storing non-text items.
  • An E-Mail component with MS Address Book integration for e-mailing content.
  • HTML and MHTML export components.
  • A Knowledge Base update system to enable new HTML templates to be added or existing ones changed.
  • A component to inform the user when a new release is available.
  • An integrated E-Commerce purchasing and licensing system.
  • Various other bits of clever code.
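
To make the multiple-folders point a little more concrete, here is a minimal sketch in JavaScript of the general idea: articles are stored once and folders hold references to them, so the same article can appear in any number of folders without ever being copied. The names used (articles, folders, addToFolder and so on) are mine, purely for illustration, and are not Surfulater’s actual internals.

    // Minimal sketch: articles are stored once, folders hold references (IDs).
    // All names here are hypothetical, purely to illustrate the idea.
    const articles = new Map(); // articleId -> article content
    const folders  = new Map(); // folderName -> Set of articleIds

    function addArticle(id, title, content) {
      articles.set(id, { title, content });
    }

    function addToFolder(folderName, articleId) {
      if (!folders.has(folderName)) folders.set(folderName, new Set());
      folders.get(folderName).add(articleId); // a reference, not a copy
    }

    addArticle("a1", "XML Engine Notes", "...");
    addToFolder("Programming", "a1");
    addToFolder("Surfulater Research", "a1"); // same article, second folder

    // Editing the stored article once updates it in every folder that references it.
    articles.get("a1").title = "XML Engine Notes (updated)";

Because a folder only ever holds a reference, removing an article from one folder doesn’t touch the copies in the others – there are no other copies.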

When you add all of these components together, and take into account that so much of the application is driven indirectly by what’s in the XML Knowledge Base instead of being hard-coded, you have a very flexible system that can be used for a broad range of information-management applications.

Add to this the use of open-standard XML for storing information and you’ve got a system with very little lock-in, compared to the many applications that store your data in proprietary file formats or databases.
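
As a rough illustration of how this template-driven design can work – and it is only an illustration: the XML layout, element names and {placeholders} below are invented, not Surfulater’s real schema – the sketch keeps an HTML display template and an article together in one XML document, then merges the article’s fields into the template. Changing how articles are displayed means editing the template text in the file, not changing the program.

    // Hypothetical XML: the display template lives alongside the data it displays.
    // Element names, attributes and placeholders are all invented for this sketch.
    const xml = `
    <knowledgebase>
      <template id="bookmark"><![CDATA[
        <div class="article"><h2>{title}</h2><p>{notes}</p></div>
      ]]></template>
      <article template="bookmark">
        <title>Surfulater, Under the Hood</title>
        <notes>Display is driven by the template, not by hard-coded layout.</notes>
      </article>
    </knowledgebase>`;

    const doc = new DOMParser().parseFromString(xml, "application/xml");
    const article = doc.querySelector("article");

    // Look up the template the article asks for...
    const tplId = article.getAttribute("template");
    const template = doc.querySelector(`template[id="${tplId}"]`).textContent;

    // ...and merge the article's fields into its {placeholders}.
    const html = template.replace(/\{(\w+)\}/g,
      (_, field) => article.querySelector(field)?.textContent ?? "");

    console.log(html); // the content window would simply render this HTML

Adding a new kind of article then largely comes down to adding a new template, which is exactly the sort of change a Knowledge Base update can deliver without touching the program itself.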

You are probably wondering by now what application-specific code actually exists in Surfulater. Well, there is code to acquire content from Web Browsers and from the Internet and save it in Surfulater, code to enable Web pages to be displayed in a Web Browser, some application-specific menu items, and that’s pretty much it. You will find this contrasts starkly with the design of most applications.

Time to return to the “vision statement” question.

My vision was to create an open-ended, flexible software platform with a strong focus on code re-use, one that could be used to build a range of easy-to-use applications in a timely and cost-effective manner. The flexibility of the underlying platform flows through to the applications built on it, eliminating the hard-wired, hard-to-change nature of most software.

XML and HTML were chosen as the core file types because they are easy to extend, don’t lock users in, and because they form a very powerful combination of presentation and structured information storage. In addition, they are plain text files that can be viewed and edited with any text editor.
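
To underline the no-lock-in point: because everything is stored as plain text, even a few lines of script written outside Surfulater can read a knowledge base. The snippet below is a hypothetical Node.js example; the file name “MyResearch.xml” and the <title> element are assumptions for illustration, not the real file layout.

    // Hypothetical example: read a knowledge base with nothing but Node.js.
    // The file name and the <title> element name are assumptions, not the real layout.
    const fs = require("fs");

    const xml = fs.readFileSync("MyResearch.xml", "utf8");

    // List every article title with a simple pattern match.
    const titles = [...xml.matchAll(/<title>(.*?)<\/title>/g)].map(m => m[1]);

    console.log(`${titles.length} articles found:`);
    titles.forEach(t => console.log(" -", t));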

I want to visit “ease of use” one last time. In the past few weeks I’ve looked at a number of programs that are somehow related to what we are doing. I try to do this on a regular basis. Some of these programs were so cumbersome and complex I had a hard time seeing who would ever want to use them. (Yes, I know I’m incredibly biased.)

In comparing Surfulater, a few things really stand out. First, when you edit content you do it in the same window in which you view it, i.e. in situ. This is akin to using a word processor. Conversely, most other programs open up a new window with a completely different layout, forcing you to switch context and reorient yourself.

Next is the way Surfulater is free from the endless pop-up message boxes of other programs that require you to confirm some operation, or complain that you’ve done something wrong. Alan Cooper, in his book “About Face: The Essentials of User Interface Design”, couches this in terms of programs that keep on nagging their users and treating them as idiots. Techniques can be employed to better manage many situations that otherwise typically interrupt the user’s work flow and train of thought. I’ve used such techniques wherever possible in Surfulater.

Finally, you will see that Surfulater doesn’t have a Configuration dialog, at least not yet anyway. These dialogs typically include options, and options on options, that most people don’t need and shouldn’t have to be concerned with.
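
Coming back to the in-situ editing point for a moment: Surfulater’s editor is built into its own content window, so the sketch below is only a browser-based analogy of the idea, with invented markup and names. The element you are reading is the element you edit; there is no second window and no different layout to reorient yourself to.

    // Browser analogy of in-situ editing (not Surfulater's actual implementation).
    // The element being viewed is the element being edited: no separate window.
    const note = document.createElement("div");
    note.textContent = "Double-click to edit this note in place.";
    document.body.appendChild(note);

    note.addEventListener("dblclick", () => {
      note.contentEditable = "true";   // same element, now editable in situ
      note.focus();
    });

    note.addEventListener("blur", () => {
      note.contentEditable = "false";  // back to viewing; the layout never changed
      console.log("Saved:", note.textContent); // persist the edit here
    });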

Products like Surfulater need to keep out of your way as much as possible, step in as required and make tasks as simple to do as possible. There shouldn’t be a big learning curve and you should be able to put them down and pick them up again as easily as putting on your favorite pair of gloves.

Surfulater is the first product built using our new platform, and its success proves that our vision is on track. See: www.surfulater.com/success_stories.html

We aren’t finished by a long shot though. As I said earlier, I wrote Surfulater because of a need I had. Software developers who are also end users of the product they are developing have far greater insight into, and understanding of, their users’ needs. Based on the excellent feedback we receive and my own notions, we have an evolving list of new features and capabilities that will further enhance and enrich Surfulater.

Some of these enhancements include:

  • Content markup (highlighting). Done
  • Create new articles from Clipboard content. Done
  • Copy articles between knowledge bases. Done
  • Content tagging (keywords etc.).
  • Filtered tree views.
  • Advanced Search.
  • Easy creation of new article templates.
  • Template extensions.
  • Automated content categorization.
  • Knowledge Base synchronization across PC’s.
  • Shared and collaborative use.
  • Access over the Web.

Some of these are short-term and will be available soon; others are longer-term goals. Each of these new core capabilities raises Surfulater up another notch and increases its appeal to a broader range of end users accomplishing a wider range of tasks.

Ken Ashworth sent me a long and detailed e-mail last week discussing his specific needs and wants. I want to include this brief excerpt:

“SUL’s use of xml is some of the best I’ve seen, and I am making progress with Custom Article Forms. You’ve packed a lot of punch into this program, but I think you’ve only scratched the surface on the potential of xml.”

Thanks Ken, you are absolutely right.

You are probably wondering what other applications we want to build using our new platform. Well right now we have more than enough on our plate, so you will have to sit tight for the time being. That said we do have some pretty interesting ideas bubbling away. If you have any suggestions please do let me know.

I hope you now have a better understanding and appreciation of what we are doing and what our vision is. Please do tell us what you think and share your views with us.

Thanks for reading.

11 Replies to “Surfulater, Under the Hood and Down the Road”

  1. Detailed, interesting, informative…it’s not often a software developer is willing to share views in this way. I hope the habit spreads. It goes a long way to creating a relationship of trust with customers.

  2. I don’t have a website other than a few blogs with pictures of Arizona’s beautiful sunrises on them.

    I come from a “strange perspective” though: I have never tried your product, haven’t seen it yet, and the only thing I know of it is what I could glean from your above article.

    I don’t have any software design background, no website ownership to speak of, but I was piqued by the idea of how many times I have wanted to “surf something I was checking out later” but, like you said, “…just adding to favorites wasn’t enough”. Our thought train at the moment we discover something DEMANDS that we somehow interject what we WANTED from the article or post or whatever.

    Without knowing what your product is, I have to come up with my own idea of what it might look like, work like, etc. What that product came across as was a combination of “Word”, “Notebook” and “FrontPage”. Many times I have found a site through various “backdoors” – not that I am a hacker or a cracker, I’m just sort of an “information junkie”. Anyway, I have in the past grabbed a complete web page and copied it with FrontPage and interjected any thoughts I had about it, either in the title of the file “saved as” or through a link to my thoughts written in Word or Notebook.

    I am going now to go check out what your “product” really is… Will I be pleased? Surprised? Or is your product nothing like what my finite mind could comprehend?

    Mike Feddersen

  3. Hi Mike,
    Thanks for stopping by and adding your comment. By now you have probably had your first date with Surfulater and I hope it will lead to a long term relationship. From what you’ve described, Surfulater should meet and exceed your expectations.

    >What that product came across as was a combination of “Word”, “Notebook” and “FrontPage”.

    I must say I’d never thought of Surfulater quite in this way, but as far as those products go that is a reasonable picture to paint.

    Surfulater does so much more though. For example, saving information from the Web is a snap, as is organizing it, searching it, adding notes, editing, linking related information together, e-mailing it to your friends, etc. These and other capabilities make it an ideal tool to let you permanently save anything you see on the web into your own digital library.

    FYI I’m an “information junkie” as well, which is precisely why I wrote Surfulater.

    Please do get in touch and let me know how you two are getting on. And if you have any questions or suggestions don’t hesitate to ask.

  4. Neville,

    I thoroughly enjoyed your article, and thank you for the mention. Sometimes it’s enjoyable to see your name in print.

    As a user, one does not always know whom one is dealing with when buying into a program, and rarely does the average user converse directly with the author. Your article offered insights that should be helpful to all. I understand now that you DO have a grasp of what and where you want to go, both from your emails and from the article.

    Later,
    KenA

  5. Surfulater sounds interesting, but I’d much rather see a few screenshots of it in action than read a huge article. Why no easy-to-find links to screenshots?

  6. Hi Craig,
    Thanks for your comment. If you go to the Surfulater home page: http://www.surfulater.com you will see “View Screenshot” and “View Quick Tour Movies” images you can click on. These are also on the “Product Info” menu.

    The “Overview” page http://www.surfulater.com/overview.html also has some screen shots.

    Maybe you meant I should have included some screen shots in this article. If so, I never thought of that!

  7. In looking over the list of future enhancements, now that the top two are done, I hope to see “Copy articles between Knowledge Bases” and “Knowledge Base synchronization across PC’s” real soon now.

    * Content markup (highlighting).
    * Create new articles from Clipboard content.
    * Copy articles between knowledge bases.
    * Content tagging (keywords etc.).
    * Filtered tree views.
    * Advanced Search.
    * Easy creation of new article templates.
    * Template extensions.
    * Automated content categorization.
    * Knowledge Base synchronization across PC’s.
    * Shared and collaborative use.
    * Access over the Web.

    The V1.93, B0.0 enhancements are super! – JML

  8. Hi John,
    Thanks for posting. Good to hear you like the new features in 1.93.0.0. V1.94 should be out shortly. 🙂

    “Copy articles between Knowledge Bases” is high up on the todo list.

    The need to access and sync up knowledge base files from different PC’s is one I am very keen to address. This would make Surfulater an even more valuable tool. My plans for this capability are slowly formulating, suffice to say it is reasonably complex to do.

  9. Great post! I just found out about SUL from a forum post I happened to be reading. A very happy serendipitous event to be sure! I’ve only had about five minutes on the application and already my brain’s a-boil with ideas on usage. Thus this comment is a combination of suggestions and wishlist items:

    * With regard to the inter-PC syncing: I use several desktops and notebooks serially (and sometimes in parallel) and I regularly face the sync demon. To date I’ve only been using SmartSync Pro and a bunch of standard profiles along with a regulated file structure that allows some sanity to prevail. I hope that SUL will lend itself to allowing KB libraries to be placed in alternate locations and that those locations can be set as a default.

    * Another thought on syncing is utilizing WebDAV and a server-side conduit to a database. I can visualize something like Apache WebDAV connecting to a MySQL backend with SSL-encrypted tunnels from client to server and posting clippings into that backend. Searching and indexing could use the Lucene project.

    * This same process lends itself to posting shared bookmarks also. I’ve used an aging project named Bookmarks4U to accomplish this, but addition of new links is just via JS-facilitated posts. The ability to export or sync to a facility would be beneficial in group/collaboration efforts.

    * That leads me to contention issues in a shared environment… My take is that all posts should be self-contained (transaction-based) and edits to prior posts be limited and, if done, also transaction-based (à la diff) to allow for auditability.

    * The other (likely easier to accomplish) wish would be the ability to right click on an image on a page and have just that image added to an article and not necessarily created as an article itself.

    * The last thing I shall suggest is a published API that could allow SUL to integrate with other research tools like MindManager. If there was a way to create links from KB stored articles that can be attached to Mindmaps, that would help in establishing a reference tree for concepting and then in the documentation phase the ability to print and reference the materials.

    Thanks for the great work. I figure that I’ll likely buy the product before my trial period is up but I want to give it a good shake down before I shell out the hard earned bucks.

  10. Hi Michael,
    Thanks for the great post and for coming to visit. I hope your brain has cooled down. Things are definitely hot on the Surfulater front, a bit like the weather here in Melbourne this last weekend.

    There are so many great ideas bubbling to the surface for Surfulater it is making it a little difficult to ensure I remain focused and true. If you do decide to hop on board you’ll see this in our Surfulater Customer Forums.

    The biggest issue with syncing is where the PC’s are at different locations, for example @home & @work, or when people are traveling. In these situations you don’t want to be sending large files around in order to sync up. My goal therefore is to only send the changes around. This is a fairly complex area which requires techniques like conflict resolution.

    The next release, out shortly, includes bookmark support, with import capabilities from various other products.

    Sharing and collaboration are both areas of real interest. Static sharing is reasonably simple and we’ll see more of this capability before much longer.

    Collaboration is considerably more complex, as I’m sure you appreciate. Syncing may pave the way for this. We’ll have to wait and see.

    To add an image to an existing article, right-click on it and select Copy, then flip over to Surfulater and use Append Clipboard to Field, or switch into edit mode and use Paste.

    The ability to reference Surfulater articles directly from other applications is on the todo list and something I’m personally very much looking forward to.

    Surfulater is already quite open and folks are doing some interesting things.

    One of our new users, Perry Mowbray, is currently writing macro add-ins for Word, Excel etc. to enable content to be added directly into Surfulater from these applications. This is accomplished using a simple and open XML protocol. You’ll see more on this in our Forums, where you will also see plenty of Perry. 🙂

    Also note that most of your Surfulater content is stored in industry standard XML files, which makes it readily accessible by humans and other applications. And article templates are plain open HTML!

    Thanks again for your post. Hope to see you join us and help Surfulater evolve.
