DevNexus 2016 – Top 5 Reasons Improvements Fail

DevNexus is a developer conference here in Atlanta that’s been around since 2004. Last week I attended the conference, specifically seeing presentations from the Agile, JavaScript, and Architecture tracks.  This post is the first of several sharing my impressions of the event and relaying some of the information I learned.

Top 5 Reasons Improvements Fail (Agile)

This session discussed how to identify team performance issues and why they’re probably not in the areas you thought they were.  The five reasons mentioned are:

  • Broken Target – identify the real problems at hand, don’t assume implementing best practices will fix things
  • Broken Visibility – measure where time is actually being spent: troubleshooting, learning, or doing rework
  • Broken Clarity – stop making generalizations, identify the specific patterns
  • Broken Awareness – break hindsight bias and stop and think
  • Broken Focus – improve visibility of root problems to management

Broken Target

The lesson here was to make sure you’re trying to solve the right problem.

Situation: A team was spending lots of time doing bug fixes every time they were getting ready to release to production.
First Solution: They added hundreds of unit tests for their code.
Outcome: The team was spending the same amount of time fixing bugs before releasing.

The team thought that their code quality was too low, so their first assumption was that adding automated tests would fix their quality issues. They spent a big chunk of time adding unit tests for all their code, but still ran into the same problem before releasing.  Their initial solution, therefore, was not the right one.

The speaker then gathered data on the team’s performance and behavior over time and proposed a new solution.

Second Solution: Make smaller releases.
Outcome: Less time was spent fixing bugs before each release and overall.

The real problem the speaker identified was that the team rushed to complete work toward the end of every sprint, and especially toward the end of a release.  Because they rushed, more bugs ended up in the code.  Even once they had moved on to code for the subsequent release there were still hidden bugs, so each pre-release debugging period grew longer and longer, creating an order-of-magnitude cost increase.  The real solution, then, was to make “as small as possible” releases. With this release cadence, even for the same total number of features, less overall time was spent on bugs and releases shipped faster.

Broken Visibility

Identifying the right problems can be difficult.  Even if a team holds a retrospective at the end of every sprint or iteration, the issues raised there may not be the larger, more important ones.  The issues raised will most likely be whatever the developers feel most strongly about, and that list is influenced by both recency bias – whatever problem came up most recently – and guilt bias – whatever problem is tied to something an individual developer caused. Additionally, the solutions proposed during this time are affected by known-solution bias – reusing a solution you’ve used before without first checking whether it’s the right one – and sunk-cost bias – continuing down an existing route because you’ve already put a lot of effort into it, without considering alternatives.

To aid in identifying the right problems, the speaker created custom tooling that gathers data on how the developers spend their time.  Developers use the tooling to indicate when they run into some sort of friction during development.  The three categories of friction are troubleshooting, learning, and rework.  This shows where time is actually being spent, and from there you can drill down to the root causes.  For instance, realizing that troubleshooting code written by other teams is costing yours 1,000 hours a month lets you address that specific problem.

Broken Clarity

Shared understanding is key.  Generalizations allow people to communicate without giving specifics, and thus without a guaranteed shared understanding.  For instance, if developers and the business consistently walk away from discussions each believing they’re on the same page, yet during the business review of a feature the business is upset because their expectations weren’t met while the developer thought they built it exactly as specified, there’s a communication issue. One solution, which aids in reaching a shared understanding and removes generalities from conversation, is to create a glossary of the terms your team uses and what they mean.  That grants everyone a common language and understanding, and actually reduces the defect rate as a result.

Broken Awareness

Sometimes people make decisions without even being aware of them.  You can review a situation and help someone understand that a different approach may have been better, but every time a similar situation comes up, they repeat their original behavior.  Certain behaviors are simply on auto-pilot, and unless you can get someone to stop and think before a decision gets made, they’ll choose the same path every time.

Broken Focus

There is always some sort of pressure on development teams from management to accomplish certain goals, and it can be tricky to get management to understand the need to invest time and money in certain things now instead of new features.  This “wall of ignorance” between development teams and management needs to be overcome for both sides to feel like they’re being heard and the team is heading in the right direction.  The speaker encourages teams to use a “risk translator” to make concepts that are important to developers transparent to management: quality risks, which correspond to troubleshooting costs; familiarity risks, which carry a cost to learn; and assumption risks, which can force rework.  A gambling metaphor was suggested to translate the ROI of a given priority – for instance, hurrying to cut 40 hours off development now might increase the chance of a 400-hour time sink later by 20%, an expected cost of 80 hours to save 40.

The speaker suggests a three-month trial of measuring data to identify the biggest problem areas, and thus the priorities to focus on to reduce friction. Common areas where teams acquire friction-generating technical debt are test data generation, missing diagnostic tools, and environment problems.  These are areas where teams routinely spend significant portions of their time, which could be reduced by spending some time up front addressing the needs in a given area.

Conclusion

Overall it was an interesting and well-presented session.  It wasn’t that any of the ideas were particularly new, but putting so much focus on gathering data to support problem identification and decision making showed a different approach, one that allows for quantitative evaluation of team performance – without using velocity.  Even just the basic message of “stop and think – what is the real problem?” is useful and important to keep in mind so you don’t spend time on the wrong areas.

More reviews to come.

Improving Scalability and Performance with Asynchronous Controllers

The Task-Based Asynchronous Pattern provides a simple way to improve the performance and scalability of MVC 4 web applications.  Every web application has a finite number of threads available to handle requests, each of which carries a certain amount of memory overhead.  Asynchronous methods take the same amount of time to run as synchronous ones, but asynchronous methods release their executing thread while waiting, allowing more simultaneous requests to be processed.

Asynchronous methods are not appropriate for every situation.  Synchronous methods are better suited to short, simple operations and to CPU-intensive work.  Asynchronous methods are appropriate in situations where parallel processing is possible, where operations rely on network requests, or where long-running requests are holding up site performance.  Additionally, any process that you want to be able to cancel before completion should be asynchronous.  Figure 1 shows the results of a test where the IIS thread limit was set to 50 and the number of concurrent requests increased over time.  The figure makes clear the advantage of asynchronous methods over synchronous ones under a high volume of concurrent requests.


Figure 1 Synchronous vs. asynchronous response times by number of concurrent requests

With the inclusion of the Task library and the async and await keywords, asynchronous methods have become much simpler in MVC 4, though asynchronous controllers have been possible as far back as MVC 2.  Figure 2 shows side-by-side synchronous and asynchronous method calls that achieve the same end result.  The key changes to make a method asynchronous are:

  • Changing the return type to Task<ActionResult>
  • Using the async keyword to mark the method
  • Using the await keyword on the GizmoService call

(Appending “Async” to the method name is not required, but considered good practice)

Controller Methods

Synchronous

public ActionResult Gizmos()
{
    ViewBag.SyncOrAsync = "Synchronous";
    var gizmoService = new GizmoService();
    return View("Gizmos",  gizmoService.GetGizmos());
}

Asynchronous

public async Task<ActionResult> GizmosAsync()
{
    ViewBag.SyncOrAsync = "Asynchronous";
    var gizmoService = new GizmoService();
    return View("Gizmos", await gizmoService.GetGizmosAsync());
}

GizmoService Methods

Synchronous

public List<Gizmo> GetGizmos()
{
    var uri = Util.getServiceUri("Gizmos");
    using (WebClient webClient = new WebClient())
    {
        return JsonConvert.DeserializeObject<List<Gizmo>>(webClient.DownloadString(uri));
    }
}

Asynchronous

public async Task<List<Gizmo>> GetGizmosAsync()
{
    var uri = Util.getServiceUri("Gizmos");
    using (HttpClient httpClient = new HttpClient())
    {
        var response = await httpClient.GetAsync(uri);
        return await response.Content.ReadAsAsync<List<Gizmo>>();
    }
}

Figure 2 Synchronous vs. asynchronous method examples

Task<TResult> is a class in the System.Threading.Tasks namespace that provides a hook to notify listeners when work has completed.  The async keyword indicates to the compiler that the method is asynchronous and is meant to be paired with a corresponding await keyword; without an await, the method executes synchronously.  await releases the controlling thread back into the thread pool, and execution continues when the corresponding task finishes.  In this case, GizmosAsync() runs on an IIS thread, just like Gizmos(), until it reaches “await gizmoService.GetGizmosAsync()”; the IIS thread is then released back to the thread pool until GetGizmosAsync signals that it is ready, at which point an IIS thread picks the method back up and finishes it.

Asynchronous methods also allow the performance gains offered by parallel processing.  Figure 3 shows a synchronous and an asynchronous method for getting three lists of objects.  The synchronous method must execute each request sequentially before returning a value; the asynchronous method issues all three requests simultaneously and can return a value as soon as the last task completes.

Controller Methods

Synchronous

public ActionResult PWG()
{
    ViewBag.SyncType = "Synchronous";
    var widgetService = new WidgetService();
    var prodService = new ProductService();
    var gizmoService = new GizmoService();

    var pwgVM = new ProdGizWidgetVM(
        widgetService.GetWidgets(),
        prodService.GetProducts(),
        gizmoService.GetGizmos());
    return View("PWG", pwgVM);
}

Asynchronous

public async Task<ActionResult> PWGAsync()
{
    ViewBag.SyncType = "Asynchronous";
    var widgetService = new WidgetService();
    var prodService = new ProductService();
    var gizmoService = new GizmoService();
    var widgetTask = widgetService.GetWidgetsAsync();
    var prodTask = prodService.GetProductsAsync();
    var gizmoTask = gizmoService.GetGizmosAsync();

    await Task.WhenAll(widgetTask, prodTask, gizmoTask);

    var pwgVM = new ProdGizWidgetVM(
        widgetTask.Result,
        prodTask.Result,
        gizmoTask.Result);    
    return View("PWG", pwgVM);
}

Figure 3 Synchronous vs. asynchronous method executing multiple tasks

A final feature of asynchronous methods is that they can be cancelled before completion.  This allows users to cancel long-running tasks without having to take an action such as leaving or refreshing the page.  It also allows potentially long-running actions to time out automatically, throwing the designated exception.

[AsyncTimeout(150)]
[HandleError(ExceptionType = typeof(TimeoutException), View = "TimeoutError")]
public async Task<ActionResult> GizmosCancelAsync(CancellationToken cancellationToken)
{
    ViewBag.SyncOrAsync = "Asynchronous";
    var gizmoService = new GizmoService();
    return View("Gizmos", await gizmoService.GetGizmosAsync(cancellationToken));
}

Figure 4 Example of a cancellation token in an asynchronous method
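The cancellation-aware overload of GetGizmosAsync isn’t shown above; a minimal sketch, assuming the same HttpClient-based GizmoService from Figure 2, might look like the following – HttpClient.GetAsync accepts the token directly:

public async Task<List<Gizmo>> GetGizmosAsync(CancellationToken cancellationToken)
{
    var uri = Util.getServiceUri("Gizmos");
    using (HttpClient httpClient = new HttpClient())
    {
        // Passing the token lets the AsyncTimeout (or the caller) abort the HTTP request itself
        var response = await httpClient.GetAsync(uri, cancellationToken);
        return await response.Content.ReadAsAsync<List<Gizmo>>();
    }
}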

As these examples show, asynchronous controllers in MVC provide improved scalability and performance over purely synchronous methods.  Asynchronous controllers and methods should be used in any MVC web application that has long-running processes or handles a high volume of concurrent requests.

References

  1. http://msdn.microsoft.com/en-us/vs11trainingcourse_aspnetmvc4_topic5.aspx
  2. http://www.asp.net/mvc/tutorials/mvc-4/using-asynchronous-methods-in-aspnet-mvc-4
  3. http://dotnet.dzone.com/news/net-zone-evolution
  4. http://blog.stevensanderson.com/2010/01/25/measuring-the-performance-of-asynchronous-controllers/

HTML5 Tips and Tricks: Drag and Drop

A useful feature in HTML5 is native browser support for drag and drop (DnD). HTML5 drag and drop supports the designation of any element as draggable, as well as setting any element as a drop target, specifying the behavior for each step along the way. Additionally, drag and drop across browser windows, and even to and from the desktop, is supported. Drag and drop is natively supported by all major browsers, and partially supported as far back as IE5.

By default, only images and hyperlinks are draggable. To allow other elements to be dragged, simply declare a draggable attribute and set it to true. This allows the user to drag the element and causes the drag events (dragstart, drag, and dragend) and drop-target events (dragenter, dragover, dragleave, and drop) to fire. Elements can be set to copy their content, move completely, or simply link back to the source element. An important note is that elements are not eligible drop targets by default, so the default behavior of the dragover and dragenter events must be cancelled to allow dropping. A simple example of HTML5 drag and drop is given in Figure 1.

<div draggable="true" id="dragBox">Drag me</div>
<div id="dropBox"></div>
<script type="text/javascript">
// Small cross-browser helper used throughout these examples
function addEvent(el, type, handler) {
    if (el.addEventListener) el.addEventListener(type, handler, false);
    else el.attachEvent("on" + type, handler); // older IE
}

var drag = document.querySelector("#dragBox");

addEvent(drag, "dragstart", function (e) {
    // required for non-text, non-image elements, otherwise the drag doesn't work
    e.dataTransfer.setData("Text", "data value as text");
});

var drop = document.querySelector("#dropBox");
addEvent(drop, "dragover", cancel);   // Tells the browser that we *can* drop on this target
addEvent(drop, "dragenter", cancel);  // Tells the browser that we *can* drop on this target

addEvent(drop, "drop", function (e) {
    if (e.preventDefault)
        e.preventDefault();  // stops the browser from redirecting off to the text
    this.innerHTML += "<p>" + e.dataTransfer.getData("Text") + "</p>";
    return false;
});

function cancel(e) {
    if (e.preventDefault)
        e.preventDefault();
    return false;
}
</script>

Figure 1 Simple example of drag and drop.
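The copy/move/link behavior mentioned above is controlled through the dataTransfer object’s effectAllowed and dropEffect properties. A minimal sketch, as a variation of the Figure 1 handlers (reusing the same drag, drop, and addEvent):

addEvent(drag, "dragstart", function (e) {
    // Restrict this drag to a move; other valid values include
    // "copy", "link", "copyMove", and "all"
    e.dataTransfer.effectAllowed = "move";
    e.dataTransfer.setData("Text", "data value as text");
});

addEvent(drop, "dragover", function (e) {
    // The drop target declares which effect it will perform
    e.dataTransfer.dropEffect = "move";
    if (e.preventDefault)
        e.preventDefault();
    return false;
});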

Some critics call the system cumbersome because of the number of events and event handlers involved, especially since the API is based on an older IE5 implementation. However, native cross-browser drag and drop without relying on external libraries is a useful feature that developers should be aware of, especially with the inclusion of support for dragging files to and from the desktop, as sketched below.
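Desktop file drops arrive through the same drop event; a minimal sketch, again reusing the drop element and addEvent helper from Figure 1:

addEvent(drop, "drop", function (e) {
    if (e.preventDefault)
        e.preventDefault();
    // e.dataTransfer.files is populated when files are dragged in from the desktop
    for (var i = 0; i < e.dataTransfer.files.length; i++) {
        var reader = new FileReader();
        reader.onload = function (event) {
            console.log(event.target.result); // file contents as text
        };
        reader.readAsText(e.dataTransfer.files[i]);
    }
    return false;
});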

Further Reading

  1. http://www.html5rocks.com/en/tutorials/dnd/basics/
  2. http://www.quirksmode.org/blog/archives/2009/09/the_html5_drag.html
  3. http://html5doctor.com/native-drag-and-drop/
  4. http://dev.opera.com/articles/view/drag-and-drop/
  5. http://www.w3schools.com/html/html5_draganddrop.asp

HTML5 Tips and Tricks: Web Storage

HTML5 is the new web standard, and its features are increasingly supported by all the major browsers.  A key feature of HTML5 is the ability to store data on the client.  This allows a website or app to be used offline and to upload information when a connection is next available, lets user preference data remain client-side, and reduces the bandwidth required to use the site through client-side caching.  This Web Storage feature is available across all modern browsers, including IE 8+.

HTML5 Web Storage provides localStorage and sessionStorage objects accessible via JavaScript.  These objects use simple key-value pairs to store string-based data. The amount of storage allocated varies by browser but typically ranges from 5 to 10 MB per domain.  The localStorage object acts as persistent data storage that retains data after the window or browser is closed. Data stored in sessionStorage does not persist after the window has been closed and is compartmentalized from other windows – otherwise it functions the same as localStorage.

sessionStorage.setItem("key", "value");
var value = sessionStorage.getItem("key");

localStorage.setItem("key", "myValue");
localStorage.getItem("key");    // returns "myValue"
localStorage.length;            // is equal to 1 in this case
localStorage.key(0);            // gets value by index
localStorage.removeItem("key"); // removes the key-value pair from
                                // the storage dictionary
localStorage.clear();           // removes all key-value pairs

window.addEventListener("storage", function(event) {
// event.key, event.oldValue, event.newValue
});

localStorage.setItem("user", JSON.stringify( { user: "John", id: 1 } );
var user = JSON.parse(localStorage.getItem("user");

Figure 1 Simple examples of using HTML5 Web Storage objects.

Figure 1 gives some examples of the manipulations you can make on the data in the Web Storage objects.  An important thing to remember is that only string values can be stored, so more complex objects must be stored using the JSON.stringify() function and retrieved using the JSON.parse() function.  Additionally, event listeners can be attached to the storage objects, allowing multiple windows using the localStorage object to stay in sync and avoid race conditions.

localStorage is a good solution for replacing cookies, retaining client-side data such as user preferences, keeping data past a page refresh, and allowing apps to be used offline; a sketch of the caching use case follows.  sessionStorage is a good solution for things such as shopping carts, or sensitive data that should be disposed of after the session is complete.  With widespread adoption across all browsers and a simple API, Web Storage is a valuable tool for web site or app development.
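As a sketch of the client-side caching use case mentioned above (the key layout and the one-hour freshness window are arbitrary choices for illustration):

// Hypothetical caching helper: serve a cached response if it is less
// than an hour old, otherwise refetch it and re-cache the result.
function getCached(key, fetchFn, callback) {
    var cached = JSON.parse(localStorage.getItem(key));
    if (cached && Date.now() - cached.time < 3600000) {
        callback(cached.data);  // cache hit: no network request needed
    } else {
        fetchFn(function (data) {
            localStorage.setItem(key, JSON.stringify({ time: Date.now(), data: data }));
            callback(data);
        });
    }
}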

Additional Resources

  1. http://www.w3schools.com/html/html5_webstorage.asp
  2. http://dev.w3.org/html5/webstorage/
  3. http://sixrevisions.com/html/introduction-web-storage/
  4. http://www.html5rocks.com/en/tutorials/offline/storage/
  5. http://paperkilledrock.com/2010/05/html5-localstorage-part-one/

Exchange Web Services (2007) – Part Two

This is the second part of a two-part series describing how to connect to an Exchange server, monitor for new mail, and locally download attachments.  The previous post described how to connect to the server and create a subscription detailing which notifications you want to receive.  This post continues on to actually requesting those notifications.

Every poll tick, we want to get any new notifications.  The GetSubscriptionItems() function fetches any new event notifications, loops through them, retrieves the actual mail message for each, and then, if it has attachments, retrieves the attachments.

protected void GetSubscriptionItems()
{
    BaseNotificationEventType[] events = GetEvents(SubscriptionInfo);

    // Go through the event list that was returned. The server always returns at least one Status event,
    // which lets the caller know that the subscription is still alive (in push subscription scenarios,
    // the Status event is sent periodically as a heartbeat signal).
    // In this example, we are only interested in ObjectChanged events, so we'll ignore any other event type.
    foreach (BaseNotificationEventType evt in events)
    {
        BaseObjectChangedEventType objectChangedEvent = evt as BaseObjectChangedEventType;
        if (objectChangedEvent != null)
        {
            ItemType item = GetItem(((ItemIdType)objectChangedEvent.Item).Id);
            Logger.WriteLog("Received new message: " + item.Subject);
            AttachmentType[] attachments = item.Attachments;
            if (attachments != null && attachments.Length > 0)
                GetAttachments((MessageType)item, attachments);
        }
    }
}

The GetEvents() function makes the actual request to the Exchange service for any new updates. As with the other requests, you first create a GetEventsType object, then assign it the subscription id and the most recent watermark – the watermark updates with every request. After checking for errors, the Notification.Items array on the response message is used as the return value, containing any actual notifications.

/// <summary>
/// Gets the latest events for a specific subscription.
/// </summary>
/// <param name="subscriptionInfo">Subscription for which to get the latest events.</param>
/// <returns>Array of notification events.</returns>
public BaseNotificationEventType[] GetEvents(SubscriptionInformation subscriptionInfo)
{
    // Create a GetEvents request
    GetEventsType request = new GetEventsType();

    // Set up the request
    request.SubscriptionId = subscriptionInfo.Id;
    request.Watermark = subscriptionInfo.Watermark;

    // Call the GetEvents EWS method
    GetEventsResponseType response = ServiceBinding.GetEvents(request);

    // Extract the first response message, which contains the information we need.
    GetEventsResponseMessageType responseMessage = response.ResponseMessages.Items[0] as GetEventsResponseMessageType;

    this.ThrowOnError("GetEvents", responseMessage);

    // Update the watermark of the subscription info.
    // GetEvents returns an updated watermark with every event. The latest watermark has to be passed back
    // the next time GetEvents is called.
    subscriptionInfo.Watermark = responseMessage.Notification.Items[responseMessage.Notification.Items.Length - 1].Watermark;

    // Return the array of events.
    return responseMessage.Notification.Items;
}

Back in GetSubscriptionItems, each notification is checked to ensure it is the correct type, and the corresponding actual email is retrieved from the server via the GetItem function. An ItemType actually encompasses emails, calendar items, tasks, and contacts. In this case we’re only interested in MessageType objects, so the result gets cast later.

You’ll also notice that I’m adding an AdditionalProperty to the BaseShape. By adding the itemMimeContent to the BaseShape, the retrieved item can be serialized to an .eml file and saved locally as a backup (.msg files cannot be saved this way). The rest of the code follows the fairly standard Exchange service procedure – make a GetX request object, set its properties, get a GetXResponse from the function call on the ServiceBinding, check for errors, then return the result.

/// <summary>
/// Retrieves the details of an item on the server.
/// </summary>
/// <param name="id">Id of the item to retrieve the details of.</param>
/// <returns>ItemType object containing the details of the item.</returns>
public ItemType GetItem(string id)
{
    // Create a GetItem request object
    GetItemType request = new GetItemType();

    // Set up the request object:
    // Set the response shape so it includes all of the item's properties.
    // Two other base shapes are provided: IdOnly and Default.
    request.ItemShape = new ItemResponseShapeType();
    request.ItemShape.BaseShape = DefaultShapeNamesType.AllProperties;
    request.ItemShape.AdditionalProperties = new BasePathToElementType[1];
    PathToUnindexedFieldType prop = new PathToUnindexedFieldType();
    prop.FieldURI = UnindexedFieldURIType.itemMimeContent;
    request.ItemShape.AdditionalProperties[0] = prop;

    // Set up the array of item ids that we want to retrieve (only one item in this example)
    ItemIdType itemId = new ItemIdType();
    itemId.Id = id;

    request.ItemIds = new BaseItemIdType[1];
    request.ItemIds[0] = itemId;

    // Call the GetItem EWS method, passing it the request that we just set up.
    GetItemResponseType response = ServiceBinding.GetItem(request);

    // Extract the first response message, which contains the information we need.
    ItemInfoResponseMessageType responseMessage = response.ResponseMessages.Items[0] as ItemInfoResponseMessageType;

    // Fail if that response message indicates an error.
    this.ThrowOnError("GetItem", responseMessage);

    // Finally, return the item detail.
    return responseMessage.Items.Items[0];
}
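The .eml backup mentioned above isn’t shown in this post; a minimal sketch of what it could look like, assuming the generated proxy classes where MimeContent.Value is a base64-encoded string:

protected void SaveAsEml(ItemType item, string path)
{
    // Decode the base64 MIME content retrieved via the itemMimeContent additional property
    byte[] mimeBytes = Convert.FromBase64String(item.MimeContent.Value);
    File.WriteAllBytes(path, mimeBytes);  // requires System.IO
}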

After getting the actual email message, any attachments to that email are retrieved. This is done by creating a GetAttachmentType object and setting IncludeMimeContent to true on the AttachmentResponseShapeType. Then you assign the array of RequestAttachmentIdType objects to the AttachmentIds property of the GetAttachmentType object and make the actual request against the ServiceBinding object.

Cycle through the results and verify that each does indeed have a file attachment and that it is the desired type (PDF in my case), then save each PDF to disk.

protected void GetAttachments(MessageType item, AttachmentType[] attachments)
{
    GetAttachmentType getAttachment = new GetAttachmentType();

    AttachmentResponseShapeType shape = new AttachmentResponseShapeType();
    shape.IncludeMimeContent = true;
    shape.IncludeMimeContentSpecified = true;
    getAttachment.AttachmentShape = shape;

    List<AttachmentIdType> ids = new List<AttachmentIdType>();

    foreach (AttachmentType attachment in attachments)
    {
        if (attachment.ContentType == "application/pdf")
        {
            FileAttachmentType pdf = (FileAttachmentType)attachment;
            Logger.WriteLog("PDF found: " + pdf.Name);
            AttachmentIdType id = new AttachmentIdType();
            id.Id = pdf.AttachmentId.Id;
            ids.Add(id);
        }
    }

    getAttachment.AttachmentIds = ids.ToArray();
    Logger.WriteLog("Requesting PDFs (" + ids.Count + ")");
    GetAttachmentResponseType response = ServiceBinding.GetAttachment(getAttachment);

    foreach (AttachmentInfoResponseMessageType attachmentInfo in response.ResponseMessages.Items)
    {
        // Ensure the attachment is a file attachment (as opposed to an attached item)
        if (attachmentInfo.Attachments[0].GetType().Name == "FileAttachmentType")
        {
            FileAttachmentType file = (FileAttachmentType)attachmentInfo.Attachments[0];
            if (file.ContentType == "application/pdf")
            {
                // Strip the extension; SaveFile appends its own
                string attachmentName = file.Name.Substring(0, file.Name.LastIndexOf('.'));
                SaveFile(file.Content, attachmentName);
            }
        }
    }
}
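The SaveFile helper called above isn’t part of the EWS proxy; a minimal version, assuming file.Content holds the raw attachment bytes and using a hypothetical output directory, could be:

protected void SaveFile(byte[] content, string name)
{
    // Hypothetical output directory; requires System.IO
    string path = Path.Combine(@"C:\Attachments", name + ".pdf");
    File.WriteAllBytes(path, content);
    Logger.WriteLog("Saved attachment to " + path);
}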

Voila! You’ve successfully retrieved and downloaded PDF attachments from all incoming email that you’ve subscribed to. Hopefully these two posts have been useful if you’re trying to set up a connection to an Exchange server and are still back on 2007. I did skim over a few areas – GetFolderByPath comes to mind – and I plan on addressing those in a later post.