Re-Exploring the Idea of a C++ CMS.

Half of my professional software career has been spent working with the LAMP stack and Content Management Systems - mainly Drupal. The half before it - the formative half - was spent on desktop programming, or as we called it back then - programming.

Since Drupal 5, I've toyed with the idea of writing a content management system in a compiled language such as C++. As Drupal has become more abstracted in layers, I've thought about it more and more, because in trying to be everything to everyone, Drupal seems to be becoming less of a content management system and more of a framework. Some already call it a framework, but I think it's fairer to say that it's moving toward becoming one.

But is a framework based on PHP really that efficient? From a coding perspective - from a development perspective - yes, it is, because development times will typically be lower with PHP, all things being equal. Still, as I ponder things that I would want to do with a framework such as Drupal, I often stop and say, "That query will take a while, so I'll have to build Yet Another Table on cron runs (as the search module does)". And then I think about how long code might take to run with some of the heavier algorithms I've considered for navigation and search.

The answer since the start of the Internet has always been to throw more hardware at complexity. For those of us who remember the Wintel era of computing, it's basically the same model that allowed more 'features' (or 'bloat') to run at an acceptable speed. It has allowed content management systems such as Drupal, Joomla and, yes, the heavily evolved WordPress to become the veritable powerhouses that they are now.

Who Needs What?

There are two sides to the content management market: the large corporation/organization side (some call it 'Enterprise') and the small side. The most development money is spent by those on the large side of the market, who need customization. The least is spent on the small side, where the basic features - and usually only some of them - are used.

In the context of Drupal, beyond being left unsupported, I don't expect that the smaller end of the market will be throwing wads of cash at Drupal 8. In time, Drupal 6 will lose support just as Drupal 5 already has, and the talk of Drupal 9 will continue. What will happen to the Drupal 6 folks? They'll either shell out the cost of upgrading to Drupal 8 or switch to something else - and I'd wager the latter will be the majority.

Most people will be looking at upgrading with nothing to gain other than support - a distinctly Microsoft business model.

In thinking about all of this, discussing it with a few people in the Drupal world, and wondering how to make more complex algorithms fast enough to be usable from a browser, I began thinking of C++ again. Granted, you could do it in any language, but I know C++; I cut my teeth on C/C++ in the professional world. And I found some interesting things.

CppCMS

In looking around, I found CppCMS - a C++ web framework - and for most people that might be just too much. There's CppBlog for those who only need a blog, and then there's WikiPP. From a back-end perspective, these will likely be awesome - but I'm not sure what front-end customization will be like or how easily the framework can be extended. Still, it's worth exploring - and it demonstrates that I'm not the only person who has been considering the inefficiencies of present frameworks, and that I'm so late to the game that I might have very little to do. It's dual-licensed, too, under the LGPLv3 or an alternative commercial license.
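
For a sense of what working with it looks like, here is roughly the 'hello world' shape that CppCMS's own tutorial material follows - a sketch from memory, so check the current documentation before leaning on the exact calls:

```cpp
#include <cppcms/application.h>
#include <cppcms/applications_pool.h>
#include <cppcms/service.h>
#include <cppcms/http_response.h>
#include <iostream>

// A minimal CppCMS application: every request gets a small HTML page,
// served by compiled code rather than an interpreter.
class hello : public cppcms::application {
public:
    hello(cppcms::service &srv) : cppcms::application(srv) {}
    virtual void main(std::string /*url*/) {
        response().out() << "<html><body><h1>Hello from compiled code</h1></body></html>";
    }
};

int main(int argc, char **argv) {
    try {
        cppcms::service srv(argc, argv);  // configuration comes from a JSON file passed with -c
        srv.applications_pool().mount(cppcms::applications_factory<hello>());
        srv.run();
    } catch (std::exception const &e) {
        std::cerr << e.what() << std::endl;
    }
}
```

You build that against the CppCMS library and run it with a small JSON config; the point isn't the framework's details but that every request is handled by native code.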

SnapWebsites

Another thing I'm planning to look at is Snap Websites. They say a mouthful right here:

The main idea of having this written in C++ is for speed of execution. PHP is an interpreted language and although it is fast to develop, it remains slow to execute.

The other big change from Drupal is the backend. We want to use a data manager, not a full blown SQL database. This introduces a problem: we cannot as easily gather data. However, quite often, this data gathering in Drupal can be quite slow. Not only that, it is difficult to maintain, difficult to backup for easy restore, difficult to move from one system to another. With a data manager that does all of that work automatically, the small draw back of some extra work for data gathering looks quite acceptable.

Another huge problem with Drupal: it is impossible to have a truly international website. I tried a couple of times, and it just is a nightmare to manage. You get different pages for different languages so each menu to that "one" page are different for each language. Imagine you have a documentation of 100 pages written in three languages, that's 300 links to manage MANUALLY! (if you want the site to work as expected, at least)

So, again, someone else has thought it through and has written something.
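
To make the 'data manager instead of SQL' trade-off concrete, here's a toy sketch of what such an interface might reduce to. The names and design are mine for illustration - not Snap Websites' actual API - with an in-memory map standing in for a real store:

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

// A toy "data manager": values are stored and fetched by key, with no
// schema, joins, or query planner. Gathering data across entries becomes
// the application's job - the trade-off described in the quote above.
class DataManager {
public:
    void put(const std::string &key, const std::string &value) {
        store_[key] = value;
    }
    std::optional<std::string> get(const std::string &key) const {
        auto it = store_.find(key);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }
private:
    std::map<std::string, std::string> store_;  // stand-in for an on-disk store
};

int main() {
    DataManager dm;
    dm.put("node/42/title", "Re-Exploring the Idea of a C++ CMS");
    if (auto title = dm.get("node/42/title"))
        std::cout << *title << '\n';
}
```

Backing up or moving a site becomes copying keys rather than dumping and restoring a relational schema - which is exactly the simplicity being argued for.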

Other Languages

Yes, I know about Node.js, as well as the million other better mousetraps out there. I'm not disrespecting their efforts at all - I'll encourage them - but I think compiled code for basic features makes a solid bit of sense for a lot of people.

Am I sold on any one path? No. But it seems to me that we're coming to a juncture with LAMP-based CMSs where at least some of the features should be compiled rather than abstracted out. It makes for easier support for the majority of users (not the high-dollar users) and doesn't leave them holding an old, unsupported version of a PHP CMS.

Conversely, it could mean a lot of trouble for support as well. I'm not sold. My eyes, however, are open.

Tags, Time and Content Creators

This entry builds on the shoulders of 'The Trouble With Tagging' and 'Navigation in a Multidimensional World of Data'. If you feel like you're missing something, check out those entries.

In previous entries I mentioned the subjectivity of tags as well as the need to have more than one point to navigate from. While some of what I'm writing about has been done, it is masked by a single text box with a search button next to it - be it on this site or a search engine of your choosing.

Tag subjectivity really depends on the author, the time of writing (what the tag meant then) and the site on which the content is published.

The Author

As I mentioned before, content creators fall somewhere between two extremes of tagging. Simply put, those extremes are 'be seen' and 'be accurate'.

We all know content that has been tagged to 'be seen' that isn't accurate. That's a constant battle search engines fight against those who game the system to have their content seen, and the motivation for that is typically advertising. It's aggravating at times to type in a search phrase only to be inundated with a bunch of links best described as 'crap'.

A photo I posted on Flickr, which I tagged very tongue-in-cheek, gets views because of the tags I used - and it's safe to say that someone searching for such things is more interested in content of another type. The same is true of this image. While both images are work-safe (and very PG), people who search for certain keywords are likely upset with me because of the tagging. Of course, they won't complain, and I get a few chuckles.

Accuracy, on the other hand, is a bit different. Being a bit of a naturalist, I take photos of wildlife and tag them with their scientific names. A great example is this image of a young cane toad. Because I tagged it accurately, the image (with my permission) made its way onto sites related to invasive species in Florida. In fact, images that I have licensed out have been tagged accurately - accuracy translating to 'getting paid'.

Getting a little bit ahead: images whose tagging I've had fun with don't really earn. But then, I don't make money off of advertising on Flickr. In fact, Flickr doesn't make money advertising on Flickr.

Content creator subjectivity in tagging can allow content to get views for the wrong reasons, or it can allow content to get views for the right reasons. Wrong or right, despite what you may think as a content creator, is not up to the content creator. It depends on the audience, and it also depends on time.

Time

Almost none of my content views happen when I publish the content. My experience is that what I write typically gets more reads after a few months. We could attribute this to a lot of things, such as the popularity of the topic and the popularity of the content creator. I've never really been interested in being popular - I've been popular for periods - but I've found that the things I've written about have been popular, and sometimes cyclically so.

Some things are timeless. Music, books, movies - even ideas. In the grand scheme of things, though, they represent a very small percentage of what has been created.

Can you name something created on the Internet and for the Internet that's timeless? There are some things that are, but when it comes to popular content, you'll likely not find anything that stands the test of time.

Then we get into what tags mean. For example, prior to February 4th, 2004, the tags 'social media' and 'social network' would not have included Facebook. Prior to July 2006, Twitter wouldn't have been encapsulated by those tags either. Why? Because they didn't exist prior to those dates. And when it comes to social media and social networking in a popular sense, how many people even know about or remember Orkut? Relatively few, I imagine.

So what tags mean is dependent on when they were used - and even The Semantic Sphere 1: Computation, Cognition and Information Economy doesn't really speak to that issue. Symbols, words, meanings - they change.

Don't believe me? Look up a random word at Etymonline.com.

Tags and Searches

There are obviously a lot of issues with tagging content, and while tags typically reflect what's popular now, they degrade over time. It's not hopeless, though.

There are two things that can be done with searches - and with tagging content in general - to assure that content stands the test of time. The content creator and the time of publishing are, generally speaking, already methods of searching - and maybe we should be treating them as tags within content management systems. Sure, you can search by person, and on some sites you can even search between specific dates, but those are not the standard - and they are not the standard because they were never designed this way. They are treated differently in databases, typically in different database tables altogether.
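
To sketch where that leads: treat the author and the publish date as ordinary tags in a single index, so 'by topic', 'by person' and 'by time' become the same kind of lookup. Everything below - names, tag formats - is hypothetical, just to show the shape of the idea:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

using PostId = int;

// One index for everything: topic tags, an author "tag", and a publish
// date "tag" all live side by side, so one query path serves all three.
class TagIndex {
public:
    void tag(PostId post, const std::string &t) { index_[t].insert(post); }

    // Return the posts carrying every requested tag (set intersection).
    std::set<PostId> find(const std::vector<std::string> &tags) const {
        std::set<PostId> result;
        bool first = true;
        for (const auto &t : tags) {
            auto it = index_.find(t);
            std::set<PostId> posts =
                (it == index_.end()) ? std::set<PostId>{} : it->second;
            if (first) {
                result = posts;
                first = false;
            } else {
                std::set<PostId> keep;
                for (PostId p : result)
                    if (posts.count(p)) keep.insert(p);
                result = keep;
            }
        }
        return result;
    }

private:
    std::map<std::string, std::set<PostId>> index_;
};

int main() {
    TagIndex idx;
    idx.tag(1, "tagging");
    idx.tag(1, "author:alice");      // the content creator, as a tag
    idx.tag(1, "published:2013-06"); // the time of publishing, as a tag
    idx.tag(2, "tagging");
    idx.tag(2, "author:bob");
    for (PostId p : idx.find({"tagging", "author:alice"}))
        std::cout << "post " << p << '\n';  // prints: post 1
}
```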

A few of you might see where I'm going with this...


The Problem of Atomizing a Social Network

I started writing last night about the Facebook solution and wanted to follow up on it in a more technical way. Most geeks will take a look at this and say, "Yeah, I know that", but there might be one or two surprises in here.

On the left, I decided to put my artistic skills[1] to work. Behold a small part of the Social Media Mob. There are some things missing from their heads such as videos, checking into places and what-have-you, but the general stuff is in there. These particular stick figures were captured in the wild and released without being harmed - the point being that they do not roam in formation like this.

What glues these stick figures together is social networks - things like Facebook, Twitter, Flickr, YouTube, JoeMama, etc. All of these 'social networks' are services - they manage specific data that the user chooses to have managed. For sharing information, back in the days before these social networks (and a little after Orkut, as I recall), RSS feeds sprang into being. Trackbacks were around for blogs, and blogs were becoming simple enough that even people with nothing to write had blogs.

Atomizing Facebook Into Blogs

Social networks came along and aggregated a lot of this content under singular sites, pulling data in from other sites as well. Facebook did it so well that it went public for more than it was worth. Everyone but the investors was happy, so Facebook decided to share the investor unhappiness with everyone else.

It's something that has bugged me for some time. When you look at Facebook, it's not all that complicated - at least on the surface. There's a lot of data involved, which makes even simple database operations time- and processor-intensive - and 'intensive' is an understatement. Still, in the broad strokes it isn't difficult to see that it's all streaming data that, depending on settings, can be seen or shared - hopefully in the way that the member of the social media mob wants. On Facebook, everyone basically gets their own site that feeds a datastream to those they are connected to. The equivalent of multiple RSS feeds stream together to form a river. That, in and of itself, is not hard to do with blog software and content management systems. So that part is easy.
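
As a sketch of how little machinery that river takes (hypothetical types, nothing more), merging subscribed feeds is just collecting their items and ordering them newest-first:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A feed item, stripped to the essentials.
struct Item {
    long long timestamp;  // seconds since epoch
    std::string author;
    std::string text;
};

// Merge every subscribed feed into one newest-first "river".
std::vector<Item> make_river(const std::vector<std::vector<Item>> &feeds) {
    std::vector<Item> river;
    for (const auto &feed : feeds)
        river.insert(river.end(), feed.begin(), feed.end());
    std::sort(river.begin(), river.end(),
              [](const Item &a, const Item &b) { return a.timestamp > b.timestamp; });
    return river;
}

int main() {
    std::vector<std::vector<Item>> feeds = {
        {{1000, "alice", "hello"}},
        {{2000, "bob", "newer post"}},
    };
    for (const Item &i : make_river(feeds))
        std::cout << i.author << ": " << i.text << '\n';  // bob first, then alice
}
```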

Deciding whose streams are shown is also easy: it's a matter of subscribing to RSS feeds. A 'friendship' on Facebook is basically a two-way RSS subscription. Facebook later allowed people to subscribe to such a feed without having to share their own in return.
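
In sketch form - again with hypothetical names - a friendship is nothing more than two one-way subscriptions, and a 'follow' is just one of them:

```cpp
#include <iostream>
#include <set>
#include <string>
#include <utility>

// Subscriptions as directed edges: reader -> feed.
class SubscriptionGraph {
public:
    void follow(const std::string &reader, const std::string &feed) {
        subs_.insert({reader, feed});  // one-way: a "follow"
    }
    void befriend(const std::string &a, const std::string &b) {
        follow(a, b);  // a friendship is simply both directions at once
        follow(b, a);
    }
    bool reads(const std::string &reader, const std::string &feed) const {
        return subs_.count({reader, feed}) > 0;
    }
private:
    std::set<std::pair<std::string, std::string>> subs_;
};

int main() {
    SubscriptionGraph g;
    g.befriend("alice", "bob");
    g.follow("carol", "alice");  // carol subscribes without sharing her own feed
    std::cout << g.reads("bob", "alice") << ' '
              << g.reads("alice", "carol") << '\n';  // prints: 1 0
}
```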

Sharing content is not hard. The tough part comes with privacy and what I call the matrix of intimacy.

Matrix of Intimacy

Some people say that there are degrees or levels of intimacy. This implies that there is some form of hierarchy - yet what people are intimate about varies. A matrix doesn't have that hierarchy. In fact, if we consider the matrix to hold truth values for the various things we share (fuzzy logic), it becomes a lot easier to understand. For example, I might want everyone to read this blog entry but only want certain friends to read another. In the Real World, people might gather a group of specific friends for the latter. In business, they call a meeting that involves only the 'right' people. It's a matrix, not a hierarchy. To their credit, Google did do better at that with Google+, but the result was unwieldy.

There are some interesting ways a matrix of intimacy could be implemented. It could work according to tags as used on blog posts - where you might only want certain people to read things you write with the politics tag, for example. Or the guysonly or girlsonly tags that may exist somewhere. Or even age ratings. Once you associate people with those tags, they get to subscribe to what you'll allow - and since they may choose not to subscribe to all that you make available, they would be able to choose from those tags as well. Suddenly, there's better control over who sees what.
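
A minimal sketch of that matrix, using the fuzzy-logic truth values mentioned above - all names and thresholds here are hypothetical. Each (person, tag) cell holds a degree between 0 and 1, and a post is visible when the reader's degree clears the threshold the author attached to the post:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>

// The matrix of intimacy: a fuzzy truth value per (person, tag) cell,
// rather than a single hierarchy of access levels.
class IntimacyMatrix {
public:
    void set(const std::string &person, const std::string &tag, double degree) {
        matrix_[{person, tag}] = degree;  // 0.0 = sees nothing, 1.0 = sees everything
    }
    // Visible when the reader's degree for the post's tag clears the
    // threshold the author attached to that post.
    bool may_see(const std::string &person, const std::string &tag,
                 double post_threshold) const {
        auto it = matrix_.find({person, tag});
        double degree = (it == matrix_.end()) ? 0.0 : it->second;
        return degree >= post_threshold;
    }
private:
    std::map<std::pair<std::string, std::string>, double> matrix_;
};

int main() {
    IntimacyMatrix m;
    m.set("alice", "politics", 0.9);  // a close friend
    m.set("bob",   "politics", 0.2);  // an acquaintance
    std::cout << m.may_see("alice", "politics", 0.5) << '\n';  // 1: alice sees the post
    std::cout << m.may_see("bob",   "politics", 0.5) << '\n';  // 0: bob does not
}
```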

If you subscribe to someone who constantly posts about politics but every now and then posts something cool and interesting about technology, you can omit their posts about politics without losing their good posts on technology. This even gives someone better feedback on what their connections are interested in: if someone's blathering about politics and no one is reading it, they might consider refocusing their energies.

Of course, this implementation would need to be refined and it could very well be.

The Problem With Implementing The Matrix of Intimacy

Such implementations would require a handshake between multiple sites. And with the code being open source, someone out there could break that handshake and do bad things.

Months ago I had the idea for the Hedgehog Project, the basis of which was privacy and the issue of intimacy - where there are varying levels of privacy depending on the relationship. The idea was to use PGP and allow different sites to swap keys so that only the people intended to read something could. Unfortunately, beyond the obvious processing issues (encrypting and decrypting content on the fly at the server), there is a whole slew of legal issues related to PGP. Therefore, despite the increased power of processors and cloud computing, the legalities make this dangerous ground.

Another way to handle this would be to verify the validity of the receiving software - but this would have to be done every time, and it would be problematic as well because it wouldn't be difficult to either hack some code around that check or simply intercept the communication.

Full Circle

Because of these issues related to privacy and the matrix of intimacy, it's not yet worthwhile to consider atomizing social networks in this way because, simply put, the systems can't be trusted. Yet. Maybe someone has other solutions for implementing the matrix of intimacy. Until then, social networks will be dependent on the companies that run them.

[1] Now you know why I prefer using a camera. My stick figures are awesome, of course, but I've learned that stick figures only get you so far. I failed Art in High School.
