Wednesday, June 27, 2012

A .NET guy @Velocityconf 2012 - Day 2

So, the first day went great and the second one was even better. Actually, this was the first official day (not counting the workshops) and we got started with an awesome keynote.

The keynote

Jay Parikh from Facebook gave us a really inspiring speech about their company culture, work environment and, of course, some staggering numbers about the load they handle. Something he said, which I have already found very useful when bringing new members into the team, is the value of fast success. Giving them the opportunity to commit a real piece of code in their first days at the new company is a big booster for what comes next. I really enjoyed the short review of Gatekeeper - the software Facebook wrote to manage the rollout of new features. Tools like this really make the difference between an ordinary software company that does just what it needs for a living and an aggressive, innovative one - automation is the key. Jay mentioned a lot of corporate values that we try to follow at Telerik as well - point to solutions, not problems; be honest; be a team player, not a player...

Two guys from Google, Arvind Jain and Dominic Hamon, talked about user experience and how we can improve it by preloading things. Something new for me was the
<link rel="prerender" href="some_resource" /> tag for Chrome, or
<link rel="prefetch" href="some_resource" /> for Firefox. Basically, this way you can advise the browser to prefetch resources that you believe are likely to be requested by the user next. This is awesome and, as you probably know, Google has been using this technique for some time in their search engine and in the browser when you type a URL. What they do is keep track of which URLs you choose, and with what probability, when typing certain characters. This gives them the opportunity to preload, with amazing accuracy, most of the resources you are about to request. Just an example of how effective this can be - Google has measured the seconds they save for each user by preloading resources, and it turned out that in a single day they save their users a combined 30 years of waiting (for the search engine and the browser)! This is something we certainly want to use in our websites.

We also got a very interesting talk from Richard Cook, who is not a programmer but has done deep research on the topic "How complex systems fail". To sum up - statistics kill the details, and if you want to really study and investigate a problem you should not observe the data from a bird's-eye view, but get down to the single-item level and study every case individually. It looks like this has been the key to solving lots of mysteries in human history - most notably, it is how the cause of the cholera disease was found. By aggregating data you can even hide the problems and not know when your systems behave badly.

Performance implications of Responsive Web Design

Lately, we have been using this technique a lot. We just released a new version of TelerikTv (which I will blog about later) that uses a responsive layout to adapt to different screen parameters. What we should keep in mind when using responsive web design is that hiding or adapting content doesn't necessarily make it optimized for the particular device, and you can even incur a performance penalty if you don't implement the technique properly.

One thing to keep in mind is responsive images. Currently, it's very difficult to serve the proper image size for the specific device and layout. It looks like we need new markup syntax in order to express different sources for our images. Otherwise, we can only adapt the visual presentation, but the amount of downloaded data will remain needlessly high. To do this today, you have several options, none of which is an obvious winner, and they will probably be replaced in the future by a new standard introduced in HTML, for example. Here is an interesting read from A List Apart about this topic.
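One workaround that works today is to let CSS media queries pick the image, so a small screen never requests the heavy asset. This is just a sketch with made-up class and file names, not a replacement for proper responsive image markup:

  <style>
    @media (min-width: 481px) {
      /* larger screens get the full-size asset */
      .hero { background-image: url(images/hero-large.jpg); }
    }
    @media (max-width: 480px) {
      /* small screens download only the lighter version */
      .hero { background-image: url(images/hero-small.jpg); }
    }
  </style>
  <div class="hero"></div>

The obvious limitation is that it only works for images you are willing to treat as backgrounds, which is one more reason a proper standard is needed.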

Roll back: the impossible dream

James Turnbull explained how hard it actually is to make a real roll back of a system that is currently running, and speculated on whether this is even possible. Something worth mentioning: a lot of people count on their roll back procedures as a possible last-resort solution, but roll back routines are hardly ever tested and practised. Performing an operation for the first time when something has already gone terribly wrong and the pressure is enormous is probably not the best thing to do. Maybe we should not count on roll back procedures at all? Maybe we should invest time in actually solving the problem, instead of trying to roll systems that are constantly changing back to their initial state without losing the operational data.

As a side note, James mentioned the myths that every company and team has about how to do certain things and how not to do others. His advice was to go and re-think all the "we don't do things this way, because something terribly wrong happens every time we do it" statements we are used to, find the cause and fix it.

The expo hall

There is a large number of companies presented in the expo area. Most of them I hadn't heard of, mainly because they are not targeted at the .NET world, but I found out that most of their tools can be useful in our environment too. For example, I got a good impression of a company that offers performance dashboards as a service. You install an agent on every live server you have; it tracks the activity of your web processes, collects data such as Windows performance counters, and sends it to their service. There you get a nice presentation of your current live environment across all running servers. You can also add information from your logs or eCommerce solution and get all this data in one place. Something we should probably check out with our admin team.

Something I forgot to mention yesterday was the webpagetest.org project - an online tool that runs a performance study of your website from different geo-locations with different browsers and gives you nice analytical data about how your website behaves. Another interesting project (still in beta) that is worth mentioning is http://httparchive.org/ - a nice source of statistical data regarding HTTP traffic worldwide. According to the trends shown there, Flash is steadily disappearing, websites are using more and more custom font faces, etc. Be sure not to miss it!

Well, basically these were today's highlights from VelocityConf. It has been a very busy day with lots of new information. I'm sure that tomorrow will be even better.

Tuesday, June 26, 2012

A .NET guy @Velocityconf 2012 - Day 1

Thanks to my company, this year I have the honour to attend VelocityConf - a conference about performance and optimization organized by O'Reilly. Needless to say, there are very few .NET developers here besides me and my colleague - in fact, we are still looking to find the first one :). Still, the topics discussed here are general enough to apply to any developer who deals with web development, regardless of the technology and framework used. This is my first conference outside Bulgaria and there is a lot of everything going on here in Santa Clara.

The first day was an optional one and consisted of four time slots with four different tracks. These were supposed to be workshops, but I think almost all of them ended up as regular sessions. Nevertheless, the topics and the speakers were very interesting, so I will try to present some useful takeaways from those I attended.

The day started for me with the session "Understanding and Optimizing Web Performance Metrics" by Bryan McQuade from Google. He explained in great detail how the browser's rendering works and showed us the most frequent problems we might run into. He played with the PageSpeed Insights tool and showed what usually slows down the downloading, rendering and displaying of typical web pages. We took a look at the Critical Path Explorer feature, which will be of great help when optimizing the loading of your web pages. It not only shows you how much time each resource takes to load, but also reveals which resource is blocking the rest of the page, how much time it takes to apply the CSS to the document, and many other useful properties. Another great new thing I learned was the Navigation Timing API, which gives you information about how long it takes to reach different events while loading your web page. It starts with the DNS lookup and the TCP connection establishment and ends with the completely loaded page. This API is part of a W3C specification and is implemented by most modern browsers - it's there to help you measure the latency and find possible bottlenecks (a small sketch of reading it is shown after the list below). My takeaways from this session are:
  • to be very careful with document.write, as it might greatly degrade the page rendering speed by blocking the processing of other resources.
  • to make redirects for mobile content cacheable per user, as this will save some round-trips to the server (and a round-trip to the server can be quite expensive when browsing on a mobile device)
  • to include the complete certificate chain when using SSL in order to save additional requests
  • to specify the encoding of the resources in the response headers
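Here is a minimal sketch of reading the Navigation Timing API once the page has loaded. The timing properties are part of the W3C specification; the way the numbers are logged is, of course, just for illustration:

  <script>
    window.addEventListener("load", function () {
      // defer one tick so that loadEventEnd is already populated
      setTimeout(function () {
        var t = window.performance.timing;
        var dns     = t.domainLookupEnd - t.domainLookupStart; // DNS lookup time
        var connect = t.connectEnd - t.connectStart;           // TCP connection time
        var total   = t.loadEventEnd - t.navigationStart;      // full page load time
        console.log("DNS: " + dns + " ms, connect: " + connect + " ms, total: " + total + " ms");
      }, 0);
    });
  </script>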
The day continued with "Taming the mobile beast" by two guys from Google - Matt Welsh and Patrick Meenan. They presented a lot of tools for mobile development and, more precisely, for performance measuring and optimization. The remote debugging feature of Chrome is something worth mentioning. You can debug your web site by attaching your mobile device to your dev machine with a USB cable. From there you can browse the website using your mobile connection (or a WiFi one) and this way troubleshoot the real mobile experience (traffic goes through the mobile carrier, the page is rendered by a real mobile browser, etc.). You can even inspect the DOM and get the selected element highlighted right on your device - really neat. In iOS 6 there will be a similar feature for Safari. The guys also showed numerous bookmarklets (a bookmarklet is "unobtrusive JavaScript stored as the URL of a bookmark in a web browser or as a hyperlink on a web page" - Wikipedia; see the toy example after the list below) like Firebug Lite, YSlow mobile, jdrop (for sharing bookmarklets), dommonster (for inspecting the DOM), docsource, csses, snoopy, spriteme, a navigation timing bookmarklet (that utilizes the Navigation Timing API mentioned above) and many others. Some things to keep in mind when developing for mobile:
  • JavaScript is a lot slower when used in a mobile browser (mainly because of the slower processors)
  • Mobile carriers use proxies that behave differently depending on the hardware and configuration used by the particular mobile operator. Testing with a mobile connection is essential and WiFi cannot be a replacement.
  • Caching behaviour is different on different devices - in general the cache is much smaller, and sometimes it is not persisted after closing the browser or restarting the device.
  • Initiating a TCP connection is a much slower operation; moreover, it has different round-trip times in different countries
  • The LTE standard is getting popular and will make things better... but not by much, since the real bottleneck is not the bandwidth but the latency. It can take several seconds for a device just to negotiate a radio channel with the cell.
  • Because of the above point, more parallel connections do not always make things better - we pay a performance penalty for negotiating new connections.
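To make the bookmarklet idea concrete, here is a toy example (not one of the tools mentioned above): an anchor whose href is a javascript: URL. Dragging it to the bookmarks bar turns it into a bookmarklet you can run on any page:

  <a href="javascript:(function(){ alert(document.title + ' contains ' + document.getElementsByTagName('*').length + ' DOM elements'); })()">Count DOM elements</a>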
After the lunch break, Ian White from Neustar gave us "Dev vs. Prod - Optimizing your site without making your build process suck" - a great introduction to nginx, a lightweight server used mainly for serving static content. My colleagues and I have already discussed using this server for our static resources, and this session was a great demo of its features. Nginx proves to be a lot faster than IIS when it comes to serving static content like images, stylesheets, etc. You get great control over what is served to the client and with what particular headers and properties. I hope that we will soon get the chance to put it into practice.
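To give an idea of that control, here is a minimal sketch of an nginx configuration for serving static assets with long-lived caching headers - the host name, root folder and cache lifetime are just assumptions for the example:

  server {
      listen       80;
      server_name  static.example.com;
      root         /var/www/static;

      # aggressive caching for images, stylesheets and scripts
      location ~* \.(png|jpg|gif|css|js)$ {
          expires     30d;
          add_header  Cache-Control "public";
          access_log  off;
      }
  }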

Last, I visited Baron Schwartz's session "Benchmarks, Performance, Scalability and Capacity - What's behind the numbers?". A nice talk for gaining general knowledge about preparing benchmarks and analysing charts and statistical data - not of big practical use for me, though.

This great first day ended with a party by the pool area. I'm really excited and looking forward to day 2.

Monday, June 4, 2012

My impressions after developing with Visual Studio 2012 for a few weeks

Actually, I lied in the title. I first used Visual Studio 11 and then switched to Visual Studio 2012 RC. For simplicity, in this post I will refer to all Visual Studio 11 betas with the new Visual Studio 2012 product name. I installed one of the first betas available on my work PC and tried to use it as my primary IDE. I will share with you what I found useful and what, in my opinion, still needs to be improved. I hope my experience will help you get started faster with the new environment and warn you about possible obstacles along the way.

The setup

The first thing that got my attention was that when you open a solution with the new Visual Studio there is an upgrade process, just like before. I had the impression that with this release Microsoft would allow us to work simultaneously with 2010 and 2012, so I didn't expect an upgrade process again. Nevertheless, all the project files are left untouched; if you do have to change them with the new VS 2012, you will get some new elements in the configuration, but they won't prevent you from working with the project in VS 2010, which is really nice. Still, there were some changes to the solution file that made it unusable with VS 2010, so I made a copy of the solution file for working with the new Visual Studio and left the old one untouched. This could be inconvenient if you frequently add projects or other solution items, since you would have to do it in both solutions, but happily we don't do this very often. (This might change in the next releases; I haven't checked it in the Release Candidate. Still, having two solution files is not a big overhead, I think. EDIT: Thanks to Syd for updating me on this one - it looks like the issues I had been experiencing are no longer a problem in the latest RC.)

My very first thoughts when I opened Visual Studio 2012

At first, I felt a little lost. Although the menu structure and the different windows (Solution Explorer, Server Explorer, etc.) are the same, you have to get used to distinguishing buttons and icons not by color, but by shape. This was kind of frustrating in the beginning, but now, after a few weeks, I feel very comfortable with the new "metro", "not-colorful" user interface. I almost immediately switched to the dark theme, which is really nice, and the text highlighting is much better than the custom color theme I had in VS 2010. In the first versions, the background of some windows (Server Explorer, for example) was white, which looked really bad in the dark theme, but I see now that this is resolved and the overall design when using the dark theme feels really native.

Something else that amazed me - almost all the extensions I used in VS 2010 have worked perfectly with VS 2012 from the day I installed it. Not only JustCode, Sitefinity Thunder and some other Telerik extensions that I have, but also Ankh (I use it at home only), Minifier and so on. The only one I currently miss is the "Spell Checker" extension, which I really like and which has saved me many times from deploying typos and nonsense text. I hope it will be ported soon, too.

Some features I find really useful

Well, for me the winner so far is the search functionality when adding an assembly as a reference. It should have been there much earlier, I think, but better late than never, right? In fact, the whole dialog has been redesigned and working with it is now much more fluent than before.

The Quick Launch box - although I haven't utilized it that well yet, I think this will be something I use a lot in the future. It just takes some time to break your habits and change the flow you are used to, but it pays back very quickly. The operations you can perform from it are numerous and it will be a great productivity booster, I think.

The multi-display support is awesome. Much better than before - now I pull two tabs out of the main window and snap them, one on the left side of my second monitor and one on the right side. This had to be done manually before; now I can use the Win + Left Arrow and Win + Right Arrow shortcuts to fit the windows where I want them. Moreover, I can work on a tab outside the main window and a Word document at the same time, which is again very convenient.

In the Pending Changes window, you can apply a custom filter to the list of files. This way you can much more easily check in all the *.config files, for example, and ignore the rest.

The preview ("ghost") tab makes my workspace cleaner. Now I can browse my code without flooding my tab list with tabs I don't actually need.

What do I still dislike after several weeks of work?

Some of the keyboard shortcuts that I'm used to are no longer there. For example, I used to build with F6 and rebuild with Ctrl+F6. For some reason, only Ctrl+Shift+B is available now, which is not so comfortable for me. I know that I can add them back through the settings, but I try to modify them as little as possible in order to have the same experience when using VS on another machine. I guess I will have to live with this.

There are some drastic changes in the Pending Changes window. It's now part of the Team Explorer window and the design is completely new. You have two separate lists - files that will be included in the check-in and those that are excluded - which is nice. Assigning work items to the changeset is not so easy, though. The checkboxes are gone, and you have to either drag and drop the work item (I needed a week or two to figure out what I have to drag and where exactly) or enter the work item ID by hand. After adding the work item you should pay attention to the selected action, as it is not very noticeable now. I ended up resolving several work items when I just wanted an association. Since we have a check-in policy that makes assigning work items mandatory, this is the change in VS 2012 that bothers me most. Probably I just need some more time to get used to it.

What do I still miss in Visual Studio 2012?

I would greatly appreciate some transition between the previous version of VS and the new one. For example, right after I installed the new version I had to go and manually restore all kinds of settings that I used in VS 2010 - all the external tools I use, all the extensions, options like "Track Active Item in Solution Explorer" and so on. It would be great if I were prompted to transfer these options from my previous installation - this would make the migration much more fluent and easier. It would allow me to focus on the new features and on evaluating the new product, instead of wondering what I forgot to add to the new Visual Studio. Probably this will be a great feature for VS 2015?

And this is shocking - after the second day, I don't pay attention to the ALL CAPS menus anymore!! Even though I could roll back this behavior, very soon after starting to work with the new VS I found out that it doesn't stand in my way (but doesn't help me either). I don't think it was worth all the trouble people went through complaining about it, having in mind all the other great enhancements included.

The bottom line is that Visual Studio 2012 is ready to be your primary IDE today. Being stable and mature enough, I guess that more and more people will start their transition. I'm very interested to read your impressions after some real-life experience. We have all watched the marketing presentations and the cool stuff Microsoft has already shown, but what is actually useful for you? Feel free to leave your comments below.