Editorials - tsgrp/hpi GitHub Wiki

Copy Pasta - A TSG Tale

By: Joe Hof

I recently saw something that illustrates why we should be really careful when copying code, and why we should avoid something I call "fixing the symptom and not the problem." This is all code from our repo that was live in trunk. (Well, it was in trunk; it is now fixed.)

Commit One

tableview.js

pagingInfo.pageSize = pagingInfo.pageSizep < 1 ? window.resultsPerPage : pagingInfo.pageSize;

Important note: pageSizep is NOT defined anywhere else in the HPI code base, it is simply a typo.

This code was added to tableview about a year ago, and it was code reviewed; the line was highlighted in the compare. In and of itself the typo isn't a HUGE deal, but then copy pasta happens.

Commit Two

thumbnailview.js

Thumbnailview now needs some new paging logic, so we go to tableview, which already pages. Great! We copy over lots of stuff, including the line from above:

pagingInfo.pageSize = pagingInfo.pageSizep < 1 ? window.resultsPerPage : pagingInfo.pageSize;

So now we have bad code in two places, but wait! It gets better.

Commit Three

tableview.js and thumbnailview.js

Huzzah! We realized that this pagingInfo.pageSizep variable is ALWAYS undefined, because, you know, it is a typo and doesn't exist. So we are going to fix it, right?

pagingInfo.pageSize = (_.isUndefined(pagingInfo.pageSizep) || pagingInfo.pageSizep < 1)  ? window.resultsPerPage : pagingInfo.pageSize;

This is classic "fixing the symptom and not the problem." Instead of critically looking at the line and asking WHY pageSizep is undefined (which is obvious), we just slap an undefined check on it and move along.
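For contrast, the real fix is a one-character change: correct the typo so the guard actually reads pageSize, at which point the undefined check becomes unnecessary. A minimal sketch (the pagingInfo values and the resultsPerPage default below are illustrative stand-ins, not the real HPI config):

```javascript
// Stand-in for the browser global that supplies the fallback page size.
var window = { resultsPerPage: 25 };

// Illustrative pagingInfo with an invalid page size (< 1).
var pagingInfo = { pageSize: 0 };

// The actual fix: read pageSize (not the typo'd pageSizep), so the
// fallback kicks in exactly when the configured size is invalid.
pagingInfo.pageSize = pagingInfo.pageSize < 1 ? window.resultsPerPage : pagingInfo.pageSize;

console.log(pagingInfo.pageSize); // → 25
```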

I really just wanted to write this up to show how easily bad code can 1) get copied around and 2) get even worse. So when you are copying code, look at the code you are copying; it's going to have your name on it in the SVN blame. And when fixing a problem, ask WHY the problem is happening before just making the error no longer occur with an if check.

The Case Against Knockout

By: Max Stein

Sometimes you fall so head over heels in love with someone, you don't see anything but good. Their flaws are staring you in the face, and you ignore them. No matter what your friends or internet blog posts tell you, you still believe you can fix them, you'll be the one to make it work.

Then, all of a sudden, it ends. The magic's gone, and what used to be effortless and simple, becomes complicated. So you both decide maybe it's time to take a break, see other people. That's what seals the deal, there are so many other frameworks out in the world, why would you tie yourself to just one?

That's when you meet her, so easy going, not overly opinionated... and she's willing to work intimately with other frameworks. (Even your Ex)

Knockout was my first love. The first months were incredible; the simplicity that Knockout bindings bring to building powerful web interfaces was awesome. For small projects I still think nothing is as powerful and easy to use. The trouble comes when application scope increases, as soon as you want to start reusing components and extending and tweaking functionality from here to there.

I'm not saying it can't be done with Knockout. The trouble is, Knockout abstracts away so much for the sake of simplicity that it is easy to make poor design decisions that don't scale, especially if the developer doesn't understand how observables and rendering work under the covers. What I found was that using Backbone and jQuery, following a small set of design patterns (thank you, backbone.layoutmanager), was much more powerful than Knockout alone.
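Part of "understanding how observables work under the covers" is knowing that ko.observable is, at its core, just a getter/setter function that notifies subscribers. A toy sketch of the idea (this is NOT Knockout's actual source, just an illustration of the pattern):

```javascript
// Toy version of a Knockout-style observable: call obs() to read,
// obs(value) to write; writes notify every subscriber.
function observable(initial) {
  var value = initial;
  var subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value; // read
    value = newValue;                         // write
    subscribers.forEach(function (fn) { fn(value); });
    return obs;
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  return obs;
}

// A data-bind="value: pageSize" binding is essentially a subscription
// like this one, copying each change into the DOM.
var pageSize = observable(25);
var seen = [];
pageSize.subscribe(function (v) { seen.push(v); });
pageSize(50);
console.log(pageSize()); // → 50
```

Once you see it this way, the magic is demystified, and it's easier to reason about what a tangle of computed observables is actually doing at render time.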

If you do any research comparing Backbone vs. Knockout, you’ll find a lot of mean responses informing you that they are apples and oranges: Backbone is an application framework, while Knockout is a View/ViewModel binding framework. You’ll also find several libraries that try to bridge the gap (we tried Knockback). The trouble is these libraries only add another layer of complexity by binding your Backbone models to your Knockout viewmodels. So now, instead of just the abstraction of the connection between your view and viewmodel, you also have a third layer between your viewmodel and model. That’s a lot of places for things to go wrong, and a lot of black boxes your developers will be afraid to open. If I’m going to use Backbone and Knockout together, I’m going to live with the extra few lines of code where I can see that I’m manually copying an observable to my model, or vice versa.

People will argue that Knockout truly shines when it comes to forms and user input, and I won’t argue with that: the ability to seamlessly connect an input and a JavaScript variable in real time, with one line of code and an attribute in your markup, is incredible. The same thing would take 5 or 6 lines of jQuery, and you’d have to deal with subscribing to and cleaning up event handlers, which can really add up on a form with 10 controls. Still, at the end of the day, I’ve found that leveraging Backbone’s events system alleviates some of that hassle, and the rest I can live with. If anything, I enjoy that all my code is in the JS file and I know immediately what is going on, without having to check the markup’s data-bind attributes for the other half of the story.

Lastly, a word about unit testing. This is what clinched the deal for me. I found that code written for Knockout, especially in complicated modules, was often very hard to unit test, because you had to instantiate the whole viewmodel before you could test one function on it. The way Backbone events and functions are wired up, with each function hanging off the view, means that I can initialize the view and test a specific function very easily. You could argue that viewmodels could work the same way if you designed them well from the start, and I would agree; however, I’ve found that it’s more natural to end up with a well-designed view in Backbone than a well-designed viewmodel in Knockout.

That is the bottom line. Knockout makes it too easy to get something working with badly designed code that turns out not to scale well or to be hard to maintain. With a Backbone/jQuery approach, yes, it’s more code, but you also have to spend more time thinking about it, which produces a better, more sound product.

Trac to the Future

By: Max Stein

Editorial Note (GGS - Nov 2014): The concept of HPI Forms and the "One OTC To Rule Them All" was successfully implemented as of HPI 2.3

Tracs, the most notorious of HPI concepts, date back to the beginning. The goal was to provide different 'perspectives' (you heard it here first) on the same data, or rather the docbase as a whole, allowing different user groups to see different folders/document types and to have different views and actions that made sense for the way they worked.

Several attempts have been made to improve the functionality and comprehensibility of tracs; none have been very successful, with the most recent implementation arguably being foggier than the original. Despite this, I think we've made a few significant improvements worth building on.

  1. The ability to share subconfigs like OTC and Stage. It makes sense that two user groups could take a similar 'perspective' and need the same OTC and search, but slightly different actions on the stage.
  2. Multiple types per trac.
  3. A Trac Chooser that lets users pick between the tracs they have access to.
  4. Permission-based trac selection. Using permissions instead of a manual config/group membership intersection is 1000x faster.

At the same time, there are a few things we don't do well:

  1. If your type is not on the OTC for your trac, all hell breaks loose.
  2. OTCs contain some information that would be better stored elsewhere. An attribute can only have one control type, so if you want a dropdown on search and a textbox on create, you're out of luck.

And here are a few things that have tripped us up in the past:

  1. Performance: This is typically seen in trac resolution. It is almost always group based, and when we manually get every possible group the user is in, cross it with every group that's allowed access, and then deal with subgroups, it becomes a nightmare. When designing anything that requires nested for loops, always consider the performance hit if one of those collections becomes enormous.
  2. Cross-tracing. Sometimes clients think they want to move from one perspective to another. We've accommodated this in the past with a trac-hopping system based on object type. This has proven to have limitations and has led to a confusing user experience. I don't have an answer to this, but I think making users more aware of when there is a perspective change might help. This goes back to using the Trac Chooser.
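The performance point above can be sketched in miniature. The group names below are hypothetical; the point is that a pairwise nested loop costs on the order of userGroups.length × allowedGroups.length comparisons, while indexing one side first makes the check roughly linear:

```javascript
// Hypothetical group lists; in trac resolution these could each be huge.
var userGroups = ["qa", "accounting", "admins"];
var allowedGroups = ["records", "accounting"];

// Nested-loop intersection: every user group is compared against every
// allowed group, so cost grows multiplicatively with the two sizes.
function hasAccessNested(user, allowed) {
  for (var i = 0; i < user.length; i++) {
    for (var j = 0; j < allowed.length; j++) {
      if (user[i] === allowed[j]) return true;
    }
  }
  return false;
}

// Indexed version: one pass to build a lookup map, one pass to probe it.
function hasAccessIndexed(user, allowed) {
  var allowedSet = {};
  allowed.forEach(function (g) { allowedSet[g] = true; });
  return user.some(function (g) { return allowedSet[g] === true; });
}

console.log(hasAccessNested(userGroups, allowedGroups));  // → true
console.log(hasAccessIndexed(userGroups, allowedGroups)); // → true
```

With three groups a side, nobody notices the difference; with thousands of groups plus subgroup expansion, the nested version is exactly the nightmare described above.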

Looking forward, I have some thoughts on how we could improve our trac approach.

  1. A single app-wide OTC that only contains information that can be pulled from the data dictionary (no more control types, picklists, etc.)
  2. Tracs would have a way to override select pieces of the default OTC. (That way, any place the OTC is being requested would know right away whether to use the override or fall back on the default. Right now we have some places that loop through every possible OTC looking for a type.)
  3. There would be a new config called FormConfigSet: a collection of Form Configs. Say you had a formset called SEARCH. This formset would have form definitions for the search page, one form per object type. So you would create a form for qa_sop and select which properties from the qa_sop object on the OTC you wanted to include, what control type each property should use, and what picklists/validation/etc. to apply. Then in other sections of the admin, anywhere you need a form, you choose a formset instead: for search you choose the SEARCH formset, in addDocument you choose the ADD formset, and in EditProperties and BulkUpload you choose the GENERIC formset. You could create SEARCH_ACCOUNTING just for the accounting trac, or share it amongst a few. You get the drift. This not only clears up some of the current issues with duplicate OTCs, but would simplify all the places in the admin where we're creating maps of property lists.
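To make the formset idea concrete, here is a hypothetical shape for one (this is a sketch of the proposal, not an implemented schema; the property and picklist names are made up):

```javascript
// Hypothetical shape of a FormConfigSet: one named set, containing one
// form per object type, each form picking properties off the app-wide
// OTC and saying how this context wants them rendered.
var searchFormSet = {
  name: "SEARCH",
  forms: [
    {
      objectType: "qa_sop",
      properties: [
        { name: "status",  controlType: "dropdown", picklist: "sop_statuses" },
        { name: "title",   controlType: "textbox" },
        { name: "revised", controlType: "datepicker" }
      ]
    }
    // ...one form per object type the search page supports
  ]
};

// Elsewhere in the admin, a page would point at a formset by name,
// e.g. search → "SEARCH", addDocument → "ADD".
console.log(searchFormSet.forms[0].objectType); // → "qa_sop"
```

Because the control type lives on the formset rather than the OTC, the same qa_sop property can be a dropdown in SEARCH and a textbox in ADD, which is exactly the limitation called out earlier.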

If we play our cards right, these formset configs could be repurposed as the new wizard admin.