By: Filip Bentzer
2015-11-09
Awesome days at Øredev
I am sitting on the flight back from Øredev, the developer conference in Malmö. It is a rather big conference with a lot of different tracks, eight or ten talks running in parallel. Added to that are the keynotes and the booths in the hallways. The food served (breakfast, lunch and dinner) was good but nothing to write home about. The funniest thing in the hallways happened on the Wednesday: the conference was offering, free of charge, the possibility to have an NFC chip implanted in your hand. Almost 70 people took them up on the offer and became cyborgs… I skipped it since I could not figure out what to use it for. Apparently it was not possible to use it to replace the bus card in Stockholm (“SL-kortet”).
Pretty much all of the talks I attended held a very high standard, an opinion shared by everyone I talked to. Not scientific in any way, but an indication of the quality of the conference.
Ok, so what did people talk about? Well, first of all microservices and everything related to them (Docker, continuous integration, testing…), then event sourcing and lots and lots of data. JavaScript libraries are still a very hot topic. “Cloud” stuff is by now just a normal part of a developer’s world and nothing to talk specifically about, but it is always there. The company ipeer had a booth and one of the nicest ways of displaying their offerings: they had created custom Lego kits that they handed out. Awesome!
The customer I am at right now has a slight problem with their test setup. They rely way too much on manual testing, and we need to do something about that. So I went to a bunch of testing talks just to get some inspiration.
The first talk I went to was by *Christin Wiedemann*. She pointed out that not much has happened within software testing in the last 10 or so years. The last big thing was the agile/XP movement, which tried to move testing closer to the developers and get developers to write automated tests. Since then… not so much. She had no real answers but tried to start a discussion. Well, perhaps in a non-Nordic country that could have worked. I still think she was on to something. The tools have become slightly better, but not that much…
The talk *Ten failed forecasting plan assumptions* by *Troy Magennis* was very entertaining, and he made some very good points (though not really ten of them):
1. Missed start date. If you start late it does not matter whether you made a perfect estimate, you will still be late… Nothing to do here except manage expectations.
1. The team is missing, understaffed or lacks the proper competence. If the estimate is for five highly skilled Java developers and you get three .NET developers straight out of school, you will have a problem, even if the developers you get are super good. One way to handle this is to set up a matrix: one column for what you need, another for what you actually have.
1. Dependencies. If your team depends on another team, which in turn depends on another team, and so on: the longer the chain, the larger the chance that at least one of those teams is delayed. And that delays all the teams. Because dependencies are so tricky to handle, he actually recommended one larger team rather than two smaller teams that depend on each other (but not if they are independent).
1. Technical debt. If the team has to spend a lot of time fixing old sins, they will not have time to do new stuff.
1. Ship stoppers. Things that constantly hinder deployment: things that are not tested until the code reaches the staging or production environment, and that then stop the deployment.
The reason there are only five points and not ten is that he had split some points into several parts, and I left one point out. He had one very interesting tip, though: list the assumptions you made when you made the estimate, such as the number of people involved, their skill sets, the complexity of the existing code, dependencies and so on. If any of those assumptions no longer holds, you know for sure that your estimate is now wrong and you might need to do something about it. Tell the people in charge, if nothing else…
*Ashley Grant* held a talk about Aurelia, a new single-page application framework. It looked very nice, and if you have a background in ASP.NET Web Forms development you will feel at home right away. I mean that in a good way: they seem to have removed all the bad parts of Web Forms and made the rest good (mostly by not having to do post-backs, if nothing else). Check it out if you are into front-end development.
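To give a feel for it, here is a minimal sketch of what an Aurelia component looks like, loosely based on the framework’s getting-started conventions (the names are my own, so treat it as an illustration rather than gospel):

```typescript
// hello.ts – an Aurelia view-model is just a plain class, no base class and
// no post-backs. By convention it is paired with a view named hello.html
// that binds to its properties, roughly:
//
//   <template>
//     <input type="text" value.bind="name">
//     <h1>Hello, ${name}!</h1>
//   </template>
//
export class Hello {
  name = 'Øredev';   // two-way bound to the input in the view
}
```

The framework keeps the view and the view-model in sync on its own, which is what gives it that Web-Forms-without-the-pain feel.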
*Rachel Reese* held a talk about *Patterns and practices for real world event driven micro services*. She is working for Jet.com, the fourth largest e-commerce site in the States. Hmm, I think the other three were Amazon, Walmart and Apple? Anyway, reeeeeaaly lots of traffic. They are running on the order of 350+ different microservices, mostly written in F#. I am really going to look into F#; it allows for some seriously compact and well-structured code that is super easy to understand. Her tips for making it all work were:
1. Use tools to handle the life-cycle of the services. Build your own or use an existing one, but don’t let each service do that by itself. You are basically moving all the complexity out of the code and into the infrastructure, and you will need help handling that.
1. Don’t abstract. At first this seemed very wrong to me, but she had a point. A microservice is very tiny and has a single responsibility, a single purpose for existing. If you are going to use, say, Elasticsearch to handle searching, don’t hide it behind a generic search interface within the service. If you do, you will not be able to use all the features and functions of Elasticsearch to their fullest. If you throw out Elasticsearch, it should be just as easy to throw out your minimal microservice with it and build a new one, super adapted to the new search engine.
1. Be functional. Use f#. I will seriously look into this.
1. Isolate side effects. If you have a service that creates new users, do not let it send out the welcome letter as well. You might need to recreate all the users for whatever reason, and in that case you do not want to resend all the welcome letters (see the sketch after this list).
1. Use event sourcing; they are using Kafka from Apache. Super cool stuff. And you are going to need stuff like that if you are going to push the throughput to really high levels.
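To make the side-effect point concrete, here is a small sketch of my own (in TypeScript rather than F#, with made-up names, so not how Jet.com actually does it): the user service only records an event, and the welcome letter lives in a separate consumer that can be switched off when you replay the log.

```typescript
// Isolating side effects behind events – hypothetical names, with a plain
// array standing in for a Kafka topic.

interface UserCreated {
  kind: 'UserCreated';
  userId: string;
  email: string;
}

const eventLog: UserCreated[] = [];       // stand-in for the event store
const users = new Map<string, string>();  // state we can always rebuild

function createUser(userId: string, email: string): void {
  // The service's only job: record the fact. No email is sent from here.
  eventLog.push({ kind: 'UserCreated', userId, email });
}

// Rebuilding state is safe to repeat; sending email is the side effect we
// keep isolated so it can be skipped when the events are replayed.
function replay(sendEmails: boolean): void {
  users.clear();
  for (const e of eventLog) {
    users.set(e.userId, e.email);
    if (sendEmails) console.log(`Welcome email to ${e.email}`);
  }
}

createUser('u1', 'anna@example.com');
replay(true);    // live processing: the welcome email goes out once
replay(false);   // recreating all users: no letters are re-sent
```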
*Neha Narula* had a rather boring talk about *Splitting and Replicating Data for Fast Transactions*. It was boring because she mostly talked about CAP and ACID, but at the end she pointed out something really interesting. Today, if you want to scale data storage (really big), you basically have to accept eventual consistency. But there is work going on, at Google and elsewhere, to create a globally distributed, high-performance, ACID-compliant database. Banking applications on a global level… A long way to go, but we will get there eventually.
*Fiona Charles’* talk *Some models are useful* was interesting but a bit too advanced for me. It was about test models and test plans. If you are interested in advanced test planning, she is the person to check out.
*Gojko Adzic’s* talk *Turning continuous delivery into business advantage* was very entertaining. His point was that continuous delivery is very much a solution to a technical problem, but one with potentially bad consequences for the business and the users. His short list of things not to do:
1. Do not confuse the users.
1. Do not interrupt the users’ work or sessions.
1. Do not disrupt or prevent marketing initiatives.
The main way to solve this is to realize that deployment is not the same thing as release. You can deploy something, even let some users use it to validate its functionality and usefulness; the release is then a pure marketing event. The way to get there is to version everything: data structures, JavaScript files, CSS files, API interfaces and so on. Then you control the version at the user level. That lets you run A/B tests: give the users from Sweden one version while the Danish users get another, or try a new, uncertain feature on 2% of the users. He called this multi-versioning the data, and went so far as to say that doing continuous delivery without multi-versioning is irresponsible.
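Here is a tiny sketch of my own (hypothetical names, not from the talk) of what controlling the version at the user level can look like: hash each user into a stable bucket and let routing rules, not deployments, decide who sees what.

```typescript
// Per-user versioning: each user lands deterministically in one of 100
// buckets, and the rules below decide which deployed version they see.

function bucketOf(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;  // simple stable hash
  }
  return hash % 100;                              // bucket 0..99
}

function versionFor(userId: string, country: string): string {
  if (country === 'SE') return 'v2';           // Sweden gets the new version
  if (bucketOf(userId) < 2) return 'v3-beta';  // 2% try the uncertain feature
  return 'v1';                                 // everyone else
}

// "Releasing" v2 to everyone is now a rule change, not a new deployment.
console.log(versionFor('anna', 'SE'));  // v2
console.log(versionFor('jens', 'DK'));  // v1, or v3-beta for ~2% of users
```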
*Martin Kleppmann* had a nice talk about having lots of data in many different databases. The basic structure when you have microservices is that each service has its own database. That is fine until you get something like a cache. A cache needs data from a lot of different sources, and it is not the owner of any of it. So how do we ensure that it contains correct and up-to-date information? His solution is to put all data into Apache Kafka. Kafka is a log that you can only append to at the end and only read in one direction. So you put all your input into Kafka and then you let all your other data stores read from it. If you need to set up a new database, or recreate an old one, you simply re-read the Kafka log from the beginning. By doing this you can handle enormous amounts of data very fast.
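The core idea fits in a few lines. Here is an in-memory sketch (my own, not real Kafka) of an append-only log with a derived view that can be rebuilt at any time by replaying from the beginning:

```typescript
// One append-only log; any derived store (cache, search index, database) is
// just a fold over it and can be recreated by replaying from offset 0.

interface LogRecord { key: string; value: string; }

const log: LogRecord[] = [];   // append at the end, read in one direction

function append(key: string, value: string): void {
  log.push({ key, value });
}

// A new or corrupted downstream store is rebuilt by re-reading the log.
function rebuildView(): Map<string, string> {
  const view = new Map<string, string>();
  for (const record of log) {
    view.set(record.key, record.value);   // last write wins
  }
  return view;
}

append('user:1', 'Anna');
append('user:2', 'Jens');
append('user:1', 'Anna B');        // an update is just another record

const cache = rebuildView();
console.log(cache.get('user:1'));  // "Anna B" – up to date, and reproducible
```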
I did listen to a couple more talks, but this will have to do for now. A lot of the talks are available, for now, on the website. This one was very good.
I can really recommend going if you get the opportunity. It was great fun and very interesting!