“The web is a fine place and worth developing for”, Ernest Hemingway might well have said, had he been born into the life of an Internet software developer. I like to imagine following the path he would have taken in the world of technology: trying out all kinds of exotic technologies, wondering how they can co-exist, and working out the peculiarities they may hide — all in order to deliver something complete that fascinates others.
If that seems like an overly contrived opening paragraph for a technology blog, it probably is. This is, after all, the first blog entry I have ever written. But I have done it: I have entered the world of blogging, and I have chosen to begin by writing about my experiences of working with .NET, Sitecore and related web technologies.
Sitecore is, of course, the extremely flexible, industrial-scale content management system built on Microsoft’s .NET technology, capable of serving content in multiple languages. It is fast becoming de rigueur for powering complex corporate websites.
At the time of writing, I have — almost to the day — three years of experience working with Sitecore. In that time I have led teams building large-scale, content-rich websites that draw data from multiple sources and are managed through Sitecore. While wearing my technical architect cap, I have also pulled together multiple data sources, digital asset management systems, content delivery networks and bespoke applications to produce cohesive Sitecore website solutions.
I have also worked with a variety of other technologies: WinFX (the pre-release name for what became .NET Framework 3.0, which introduced WPF and paved the way for Silverlight), Telligent Community, and various deployment tools such as CruiseControl.NET and MS Deploy.
Closely monitoring the performance of the websites that I build, and understanding how well they hold up under duress, has been an essential part of my work. Some of the sites I have worked on have served 15.5 terabytes of data over a 4-day period, peaking at 1,174 Mbit/s. Squeezing the most out of these sites — checking for and fixing memory leaks, and avoiding common performance pitfalls — has been key to sustaining that level of throughput.
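For the curious, those traffic figures can be sanity-checked with a little back-of-the-envelope arithmetic (assuming decimal units, i.e. 1 TB = 10¹² bytes and 1 Mbit = 10⁶ bits):

```python
# Rough check of the traffic figures quoted above.
total_bytes = 15.5 * 10**12        # 15.5 TB served in total
window_seconds = 4 * 24 * 60 * 60  # over a 4-day period
peak_mbits = 1174                  # observed peak rate, Mbit/s

# Average sustained rate over the whole window, in Mbit/s.
avg_mbits = total_bytes * 8 / 10**6 / window_seconds

print(f"average rate: {avg_mbits:.0f} Mbit/s")            # ~359 Mbit/s
print(f"peak-to-average: {peak_mbits / avg_mbits:.1f}x")  # ~3.3x
```

So the quoted peak is roughly three times the average sustained rate — a fairly typical ratio for a site with pronounced daily traffic cycles.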
Through frequent blogging I plan to document and share key aspects of what I have learned, for the benefit of others who are either already working with these technologies or planning to do so. Given the work I have been involved in, readers can expect generally useful nuggets of information, strategies for identifying and fixing performance and memory issues, and some best practices to follow.
If I attract enough readers I may also receive some feedback that improves my own understanding — which would be an amazing result.
I’m sure Hemingway would have approved.