Sunday 7 February 2016

LCA 2016 - Day 5


Last day of conference! This is generally considered to be the wind-down - where you acknowledge that your brain is probably too full to absorb anything really new. Think light dessert after a big meal.

Today's keynote was from Genevieve Bell. She started her talk by saying "I know I'm in Australia when I go to a conference that has a raffle."

Genevieve's talk was easily the most entertaining of the keynotes. She is an anthropologist who works for Intel, hired to help them understand two groups of people:

 - Women
 - ROW (Rest Of World - i.e. anyone not American)

Describing herself as both an unreconstructed Marxist and a radical feminist, Genevieve discussed how we as an open-source community have a moral obligation to make a better world. There are a number of benefits to the open-source paradigm, including facilitating innovation, sharing and re-use. The 'open' paradigm is increasingly extending to other areas such as open government, open culture, open health and open education.




5/1 - The eChronos Real-Time Operating System - Just what you want, when you want it by Stefan Götz


I've never worked with an RTOS before, so this workshop was a baptism of fire for me. I set up the emulator on my native OS rather than in a VM, which leaves me wondering how much stuff I've broken that will need fixing. :-|

Even with not a lot of personal background, I was still able to come out of the workshop with an appreciation of just how much is involved in an RTOS and of the utility that eChronos offers. It also made me appreciate just how much is sacrificed by including features in an OS that we could easily do without.

Since it was in the same room, I attended the Home Automation BoF (Birds of a Feather) session during lunch. Home automation is a nice idea and I have a level of admiration for those who pursue it; however, the cost and the level of work required just to have programmable central lighting control and a graphical display of your water usage are not worth it, IMHO.

5/2 - Free as in cheap gadgets: the ESP8266 by Angus Gratton


After lunch I went to two embedded Linux talks. The first one was on the ESP8266, which is essentially a super-cheap wifi module that can easily be connected to an Arduino or Raspberry Pi. So if you want your hardware project to talk wifi - this is the unit to get.

Angus covered its benefits and disadvantages and gave pointers for those wanting to work with this unit.
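
For the curious, here is a minimal sketch of what talking to an ESP8266 looks like from a PC, assuming the module runs the stock AT-command firmware and is attached via a USB-serial adapter. The port name, SSID and password are placeholders; none of this is from the talk itself.

```python
# A sketch, not from the talk: driving an ESP8266 running the stock AT
# firmware from a PC. Port name, SSID and password are placeholders.
import time
import serial  # pyserial

def send_at(port, command, wait=2.0):
    """Send an AT command and return whatever the module replies."""
    port.write((command + "\r\n").encode("ascii"))
    time.sleep(wait)
    return port.read(port.in_waiting).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as esp:
    print(send_at(esp, "AT"))                 # sanity check - expect "OK"
    print(send_at(esp, "AT+CWMODE=1"))        # station (client) mode
    print(send_at(esp, 'AT+CWJAP="myssid","mypassword"', wait=10))  # join AP
    print(send_at(esp, "AT+CIFSR"))           # report the assigned IP address
```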

5/3 - Raspberry Pi Hacks by Ruth Suehle



Ruth is the co-author of the book of the same name as the talk. It re-whet my appetite to get one, particularly when she described some of the projects people had completed with the Pi.

Closing Session - Lightning Talks


The closing session includes the five minute lightning talks. Although short, the speakers manage to put a lot into these punchy talks.
  • Steven Ellis - "A call to ARMs" was a pun plugging the ARM series of processors for NZOSS.
  • Geordie Millar - Explained his StackPtr project: an open-source GPS map-sharing project.
  • Katie McLaughlin - Discussed the #hatrack project
  • Christopher Neugebauer - Plugged PyCon Australia in August 2016. There was also a plug for Kiwi PyCon.
  • Cherie Ellis - Plugged Govhack 2016
  • Bron Gondwana - Discussed using JMAP as a better way to do email.
  • Martin Krafft - Discussed the curse of NIH and emphasised "Do one thing. Do it well."
  • Keith Packard - Demonstrated his low-cost random number generator that hooks into /dev/random

The conference finished with an impromptu performance entitled "I lied about being a Linux type".

Overall, this was a great conference! I learnt a great deal from it and I look forward to the next one. I would recommend that anyone with an interest in Linux or open-source software attend.

Saturday 6 February 2016

LCA 2016 - Day 4

Day 4 opened with a keynote by Jono Bacon, director of community at GitHub. Jono spoke of the evolution of the Open Source and Linux communities towards what he called "Community 3.0", where the expectations of open source infiltrate society at large and become part of its "common core". He stated that dignity is a fundamental human requirement and right, and that it is a product of several factors:

  • Dignity requires
  • self-respect, which stems from a person's ability to
  • contribute, which requires
  • access

Jono described system 1 and system 2 thinking and outlined the SCARF model:

  • Status
  • Certainty
  • Autonomy
  • Relatedness
  • Fairness

The two golden rules are:

  1. Accomplish goals indirectly
  2. Influence behaviour with small actions

Community 3.0 = System 1&2 thinking + Behavioural patterns + Workflow + Experiences + Packaged guidance

I guess it goes without saying that I got a lot out of this keynote.

Day 4 also saw a marked improvement in the quality of the food offerings at morning tea. I think I ate 5 or 6 of these delectable goodies. I must learn to make them at home.




4/1 - Using Persistent Memory for Fun and Profit by Matthew Wilcox


The title of this talk sounded interesting, but I quickly worked out that there was very little I could gain from it. Persistent memory is memory that retains its state after powering off. Matthew works for Intel, and they just so happen to be about to release 3D XPoint DIMMs that do this - however, they will be expensive.

Applications must be written to take advantage of persistent memory - hence the need for Intel to encourage developers to do so.
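
To make the programming model concrete: one common way applications get at persistent memory is via a DAX-mounted filesystem, where mmap() gives direct load/store access with no page cache in the middle. A minimal sketch, assuming a hypothetical /mnt/pmem0 DAX mount (my illustration, not Matthew's code):

```python
# Illustration only: mmap a file on a (hypothetical) DAX mount backed by
# persistent memory, giving direct load/store access to the medium.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/state.bin"   # hypothetical DAX mount point
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

buf = mmap.mmap(fd, SIZE)    # stores go to the persistent medium directly
buf[0:5] = b"hello"          # survives a power cycle once flushed
buf.flush()                  # msync(); real pmem code flushes at finer grain
buf.close()
os.close(fd)
```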

I couldn't help a feeling of déjà vu with this talk. Persistent memory used to be a common thing: the PDP-11 had it with core memory, and my MicroBee had it with CMOS memory. We have come full circle.

4/2 - Hardware and Software Architecture of The Machine by Keith Packard


Another vendor talk, this one from Hewlett-Packard. This talk focused on The Machine - which I had never heard of, but apparently a lot of the delegates had.

Much of this talk was dedicated to the challenge of dealing with 320TB of RAM shared amongst several processors. To handle this, a new paradigm was developed where memory is addressed in "books" instead of pages, and books are stored in "shelves". Memory is made available by the "Librarian".

In order to support the architecture of The Machine, Linux needs to be modified to support:
  • Fabric-attached memory
  • File system abstractions
  • Librarian file system

4/3 - Tutorial: Hunting Linux malware for fun and $flags by Marc-Etienne M.Léveillé


After lunch was a gruelling workshop where each participant was given a virtual machine infected with malware, with instructions to detect and defuse it and see how many 'flags' we could capture. Somehow we were meant to do this while also listening to the talk.

These sorts of workshops are generally bad for my ego. I like to think I'm pretty good at this sort of stuff, but once you're shoved in a room full of people as good as or better than you, you start to feel like a clueless noob. I eventually captured five of the ten available flags, but the malware was still persistent on my machine and I had to resort to the cheat notes. This is where I found out that the email sending was made persistent through SSL injection.


I would have liked to have more time to study and understand the mechanisms. This was certainly a valuable tutorial with direct application to the real world.

4/4 - edlib - because one more editor is never enough by Neil Brown


While admitting that the last thing Linux needs is another editor, Neil explained his justification for doing so. He described the deficiencies of current editors from the Model-View-Controller perspective and detailed how his new editor aimed to overcome them. It was enough to make me wish it wasn't in alpha.

https://github.com/neilbrown/edlib


4/5 - Playing to lose: making sensible security decisions by assuming the worst by Tom Eastman


In a classic case of leave the best 'til last, Tom described how security is enhanced by assuming the worst. He started by describing the potential threats:
  • Script kiddies, all the time in the world, in it for the lulz
  • Organised criminals
  • Former employees (top threat)
  • Hacktivists
  • Nation-state actors

Tom then went on to explore each of the 'attack surfaces' of an online presence in detail:
  • Web server
  • App server
  • Database
  • Front-end interface
  • Infrastructure

I took several pages of notes from this excellent talk. His key recommendations are:
  • White-list input validation on all user-generated input
  • Escape all data appropriately for display
  • Mitigate cross-site scripting using a Content Security Policy. The key is to ensure inline JavaScript is never executed.
  • Log and check CSP violation reports.
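
To make the CSP recommendation concrete, here is a minimal sketch using Flask (my choice of framework, not something from the talk). The script-src 'self' directive means inline JavaScript is never executed, and report-uri gives us violation reports to log and check:

```python
# A minimal CSP setup in Flask (my choice of framework, not from the talk).
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # script-src 'self' blocks all inline JavaScript; report-uri sends
    # violation reports to the endpoint below so they can be checked.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; report-uri /csp-report"
    )
    return response

@app.route("/csp-report", methods=["POST"])
def csp_report():
    app.logger.warning("CSP violation: %s", request.get_data(as_text=True))
    return "", 204

if __name__ == "__main__":
    app.run()
```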

Friday 5 February 2016

LCA 2016 - Day 3

With the intensity of the Miniconfs over, the conference settled into the streams. This is where people chop and change to whatever talk appeals to them the most. In my case I concentrated on the security topics and hands-on workshops.

The day began with the second keynote speaker for the week (Catarina Mota) who spoke on the topic "Life is better with Open Source". Good talk, but not as good as yesterday's. Her main emphasis was on open-sourced hardware.

3/1 - Using Linux features to make a hacker's life hard by Kayne Naughton

Kayne's talk emphasised the rise of Advanced Persistent Threats (APTs), which follow a distinct pattern of infiltration:
  1. Reconnaissance
  2. Weaponisation
  3. Delivery
  4. Exploitation
  5. Installation
  6. Command and Control
  7. Actions on Objectives

Successful APTs may continue for years if undetected. The six D's of mitigation are:
  1. Detect
  2. Deny
  3. Disrupt
  4. Degrade
  5. Deceive
  6. Destroy

Kayne discussed each of the steps in detail with examples.

3/2 - How To Write A Linux Security Module That Makes Sense For You by Casey Schaufler


The second security talk was highly specialised and targeted towards kernel module developers. Since I am unlikely to write a kernel module in the near future, this was more an information session for me. However I did learn the difference between major and minor security modules.

After lunch I dived into the first of two double-session workshops.

3/3 - Identity Management with FreeIPA by Fraser Tweedale


The first workshop was on FreeIPA. During the workshop we got to:
- Install a FreeIPA server and replica
- Enrol client machines in the domain
- Create and administer users
- Manage host-based access control (HBAC) policies
- Issue X.509 certificates for network services
- Configure a web server to use FreeIPA for user authentication and access control
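
As a flavour of the exercises, here is a sketch of the kind of commands we ran, driven from Python via the standard ipa CLI. The host names, user names and rule names are hypothetical, and it assumes an enrolled client with a valid admin Kerberos ticket:

```python
# A sketch of the workshop exercises via the standard `ipa` CLI.
# Names are hypothetical; assumes an enrolled host and `kinit admin`.
import subprocess

def ipa(*args):
    subprocess.run(["ipa", *args], check=True)

# Create and administer a user
ipa("user-add", "alice", "--first=Alice", "--last=Example")

# HBAC: allow the sysadmins group to ssh to a web server
ipa("hbacrule-add", "sysadmin_ssh")
ipa("hbacrule-add-user", "sysadmin_ssh", "--groups=sysadmins")
ipa("hbacrule-add-host", "sysadmin_ssh", "--hosts=web1.example.test")
ipa("hbacrule-add-service", "sysadmin_ssh", "--hbacsvcs=sshd")

# Register a service principal so a certificate can be issued for it
ipa("service-add", "HTTP/web1.example.test")
```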

It's definitely preferable to using Active Directory or OpenLDAP or (shudder) NIS.

During the workshop we used Vagrant with VirtualBox. I had never used Vagrant before and was very impressed. The workshop listed federation as one of the objectives, but we didn't have time to cover it.

I wouldn't class FreeIPA as 'true' identity management as it doesn't support connectors, data pumping or password sync - however, it certainly does replication and federation, so that's a big plus.

3/4 - Packets don't lie: how you can use tcpdump/tshark (wireshark) to prove your point by Sergey Guzenkov


The final workshop of the day was on wireshark. Now I've been using wireshark for years, so I was looking forward to something I had not seen before. I wasn't disappointed.

It was almost impossible to keep up with the lightning pace of this workshop. We quickly covered the basics of wireshark and tcpdump and launched straight into capturing SSL keys and decrypting SSL packets.

We also covered many of the little-used switches on both tshark and tcpdump and how they can be used to generate statistics for traffic reports. We also used the mergecap, capinfos and dumpcap tools.
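
A rough reconstruction of the SSL-decryption workflow (not the presenter's exact commands): the browser is started with SSLKEYLOGFILE set so it writes its TLS session secrets to a file, and tshark is pointed at that file. The interface and file names are placeholders, and the ssl.keylog_file preference name matches tshark of that era (newer releases call it tls.keylog_file):

```python
# Rough reconstruction, not the presenter's exact commands.
import subprocess

# Capture HTTPS traffic (run the browser with SSLKEYLOGFILE=keys.log so it
# writes its TLS session secrets out for later decryption).
subprocess.run(["tcpdump", "-i", "eth0", "-w", "capture.pcap", "port", "443"])

# Decrypt the capture and show the HTTP requests hidden inside TLS.
subprocess.run([
    "tshark", "-r", "capture.pcap",
    "-o", "ssl.keylog_file:keys.log",
    "-Y", "http.request",
])

# One of the little-used switches: protocol hierarchy statistics for reports.
subprocess.run(["tshark", "-r", "capture.pcap", "-q", "-z", "io,phs"])
```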

LCA 2016 - Day 2

Day 2 of LCA2016 kicked off with the first of four keynotes for the conference delivered by George Fong, President of Internet Australia. He surprised everyone by actually giving a keynote rather than a shameless sponsor promotion. The keynote was entitled "The Cavalry's not coming... We are the Cavalry" which was subtitled "The challenges of the Changing Social Significance of the Nerd."

The main thrust of the keynote was the insatiable greed for control over technology exhibited by governments and legislators - particularly when they have little to no understanding of the technologies they are trying to legislate. Particularly damaging is governments' desire to hamstring encryption technology and to impose export controls on intangibles, and the effect this has on open source. George emphasised the need to communicate technical concepts to lay people in language they can understand.

Day 2 was also the second day of the miniconfs. For me, that meant the sysadmin miniconf. This one did not have the structure exhibited by the Open Cloud Symposium - each talk was an island of knowledge, some thick and some thin. The talks were also shorter - meaning more of them. The sysadmin miniconf has its own page, so you could ignore this blog entry completely and go there.

 

2/1 Is that a data-center in your pocket? by Steven Ellis

Subtitled "There will be dragons" rather than the predictable "...or are you pleased to see me."
Steven provided a walk-through on how to create a portable, virtualised cloud infrastructure for demo, training and development purposes. This talk was heavy on detail and I found myself wanting to explore it more at a later date. He utilised a USB3-attached SSD connected to an ARM Pine64. The setup utilised nested virtualisation, thin LVM and Docker.

According to Steve, the "cloud" will very soon be mostly ARM64 - so it's time to prepare for that. He also demonstrated how UEFI Secure Boot can be used with virtual machines.


In the next talk, Martin Krafft highlighted the fact that in the transition from Unix to Linux we somehow forgot the habits born of Unix administration - in particular, we forgot about system automation, to wit:

  1. Monitoring
  2. Data collection
  3. Policy enforcement
Martin worked through scripts available at https://github.com/madduck/retrans.

2/3 A Gentle Introduction to Ceph by Tim Serong (SUSE)

I didn't get a lot out of this talk, other than becoming aware that Ceph is a distributed storage system popular in open-cloud deployments. His slides are here.

2/4 Keeping Pinterest Running by Joe Gordon

Joe talked about the challenges and differences in supporting a service as opposed to supporting a piece of software. His basic description is that it's like changing the tyres whilst driving at 100 mph. The differences include:
  • stable branches
  • no drivers and configurations
  • no support matrix
  • dependency versions
  • dev support their own service
  • testing against prod traffic
One thing that really interested me is their use of a "Run Book" for the on-call support team. All recent changes are documented in the Run Book, along with anything they could potentially affect and who to contact about them. If on-call support has to respond to a problem, they consult the Run Book first.

In addition to a staging environment, they also have what they call the "canary" environment - akin to the canary in a coal mine metaphor. However, Joe said it was more akin to a rabbit in a sarin gas plant metaphor (insert chuckles).

Their dev->prod cycle looks like:
dev->staging->canary->prod

The staging system uses dark traffic, whereas the canary system operates on a small subset of live traffic. If problems occur at any point, they roll back and conduct a blameless post-mortem. Joe emphasised that the blameless component is the most critical.

Before deployment, they conduct a pre-mortem covering:


- Dependencies
- Define an SLA
- Alerting
- Capacity Planning
- Testing
- On call rotation
- Decider to turn feature off if needed
- Incremental launch plan
- Rate limiting 

The next talk was from Tammy, whose central message was about developing self-healing systems through scripting and auto-remediation. For everything you can think of that can go wrong, rather than just logging and crashing, run a script to fix the problem. The motto for her team is KTLO - Keep The Lights On.

She also emphasised the need for a "Captain's Log" - a log of every on-call alert - and for cross-team disaster recovery testing.
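
The auto-remediation idea can be illustrated with a toy sketch (mine, not Tammy's code): when a check fails, run the fix and record what happened rather than just logging and crashing. The service name and commands are hypothetical:

```python
# Toy auto-remediation loop: fix the problem, then log it.
# Service name and commands are hypothetical.
import logging
import subprocess

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ktlo")

def service_is_up(name):
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", name]).returncode == 0

def remediate(name):
    log.warning("%s is down - attempting restart", name)
    subprocess.run(["systemctl", "restart", name], check=True)

if not service_is_up("myapp"):
    remediate("myapp")  # keep the lights on first
    log.info("remediated myapp; write it up in the Captain's Log")
```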

2/6 Network Performance Tuning by Jamie Bainbridge

This talk was more an in-depth tutorial on how to tune the network performance of your system, as well as diagnose any network-related problems. It was quite fast-paced; his slides are here.

2/7 'Can you hear me now?' Networking for containers by Jay Coles

I felt a little lost in this talk. This was part 3/3 in a series of talks by Jay on containerisation. As mentioned before, I have neglected containers - something I need to remediate as it seems everyone has embraced them. Much of the material for this talk is available here.

2/8 Pingbeat: y'know, for pings! by Joshua Rich

This was a great talk! Josh gave a quick overview of ICMP ping and then introduced Pingbeat, a small open-source program written in Go that can be used to record pings to hundreds or thousands of hosts on a network.
Pingbeat's power lies in its ability to write the ping responses to Elasticsearch, an open-source NoSQL-like data store with powerful, built-in search and analytics. Combined with Kibana, a web-based front-end to Elasticsearch, you get an interactive interface to track, search and visualise your network health in near real-time.
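
Pingbeat itself is written in Go, but the idea is simple enough to sketch in a few lines of Python (my illustration, not Pingbeat's code). It assumes a local Elasticsearch and the elasticsearch client library; the host list and index name are made up:

```python
# A Pingbeat-like loop in Python (illustration only; Pingbeat is Go).
# Assumes a local Elasticsearch and the `elasticsearch` client library;
# host list and index name are made up.
import subprocess
import time
from elasticsearch import Elasticsearch

es = Elasticsearch()  # defaults to localhost:9200
HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

while True:
    for host in HOSTS:
        start = time.time()
        result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                                stdout=subprocess.DEVNULL)
        es.index(index="pingbeat", body={
            "@timestamp": int(time.time() * 1000),
            "target": host,
            "up": result.returncode == 0,
            "rtt_ms": round((time.time() - start) * 1000, 2),
        })
    time.sleep(10)
```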
Being the ninth talk of the day, I kinda snoozed through the next one. I didn't find it particularly useful or interesting hearing about the challenges of supporting an IT system used by research scientists.

The talk after that covered Grafana, an open-source web charts dashboard. It can be configured to use a variety of backend data stores.

Andrew gave a live install, config and run demonstration of Grafana, starting from a fresh Ubuntu 14 VM with Docker (again!), where he installed and set up Graphite using Carbon to log both host CPU resources and MQTT feeds, and created a custom dashboard to suit.
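
Graphite's ingestion side is pleasingly low-tech: Carbon accepts one "metric value timestamp" line per datapoint over TCP port 2003. A minimal sketch of feeding it a datapoint by hand, assuming a local Carbon like the one in the demo (the metric name is made up):

```python
# Send one datapoint to Carbon's plaintext listener (TCP 2003).
# Metric name is made up; host/port assume a local Carbon as in the demo.
import socket
import time

def send_metric(name, value, host="localhost", port=2003):
    line = "{} {} {}\n".format(name, value, int(time.time()))
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("ascii"))

send_metric("demo.host.cpu.load", 0.42)
```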
The next talk was another on what it's like to support a continuously available service. Things to gain from it included:


- Fixed time iterations
- Plan and scope the known work for the next 2-3 weeks
- Leave sufficient slack for urgent work
- Be realistic

Interruptions:
- Assign team members to dev teams
- Have a rotating “ops goal keeper” with a day pager who is free from other work
- Have developers on pager as well. This helps in closing the feedback loop so that they are aware of issues in production

2/12 From Commit to Cloud by Daniel Hall

This talk focused on leveraging the benefits of microinstances when managing cloud-based services and infrastructure. Deployments should be:


- Fast (10 minutes)
- Small (ideally a single commit, aware of whole change)
- Easy (as little human involvement as possible, minimise context switching, simple to understand)

This leaves less to break, makes rollbacks easier, and allows the dev team to focus on just one thing at a time rather than a multitude of tracked changes. The basic idea is that deployments should be frequent and nobody should be afraid to deploy.

In the setup Daniel works with, they have:

- 30 separate microservices
- 88 docker machines across 15 workers
- 7 deployments to prod each working day!
- Only 4 rollbacks in 1.5 years

Their deployment steps are:
  1. write some code
  2. push to the git repository, build the app
  3. automated tests run
  4. app is packaged
  5. deployed to staging
  6. test in staging
  7. approve for prod (single click)
  8. deploy to production

2/13 LNAV

The final talk was a ten-minute ad-hoc one discussing lnav as a replacement for tail -f /var/log/syslog when looking at the system log. I am fully converted to this tool and will be using it everywhere from now on. It is statically linked, so you can simply copy it from one system to another as a standalone binary.





Tuesday 2 February 2016

LCA 2016 - Day 1

It's not often that a Linux Conference is held within reasonable commuting distance of my home, so it was too good an opportunity to pass up. A train trip to Footscray, transfer to a V-Line to Geelong and I'm there!

Like all conferences, the first session of the first day covers the intro and the boring stuff - like where to get first aid if your leg comes off, where the toilets are and where to get food. The last two items I paid attention to.

Next was morning tea, which meant a mad scramble to find a seat near a working power point for my notebook. Think musical chairs, but with geeks and higher stakes.

The first two days of the Conference are devoted to "mini-conferences". I was mildly interested in the kernel mini-conference, however it was being held in the Wool Museum - which is off campus. I'm not THAT interested in it to make the trek there, so I settled on the Cloud Symposium.

I was impressed. The last time I looked deeply into Linux cloud deployments was in 2007, when /dev/kvm was being merged into the 2.6.20 kernel. I quickly realised, to my chagrin, that I had neglected this area of development, and although the mini-conference was divided into separate presentations by different speakers, it all coalesced into a coherent picture.

The content of each of the presentations was full of detail. By lunchtime my brain was mush - and there were still five more presentations to go.

The Overview

If there is one thing to take out of the Cloud Symposium, it is that there is a strong push to manage infrastructure as if it were code, by making it immutable. Basically:

1) Treat your infrastructure like cattle, not pets. Never modify infrastructure, slaughter it on a regular basis.
2) Document first. Use the documentation to create infrastructure containers. This makes the process repeatable.
3) Execute apps as one or more stateless processes.
4) Repeatability, Reliability, Resiliency.
5) Automate the development lifecycle - this extends to PaaS - People-As-A-Service.
6) Create a DevOps culture in the organisation.

The latter I find particularly interesting as I have encountered the converse many times. Usually you have separate non-intersecting groups of developers (or implementers) and operators (or support). This creates an undistributed middle where once the project is committed, dev figures their job is done. What happens is a détente something like:

op: This app keeps crashing, you need to fix it.
dev: It works fine in the dev environment, it must be a problem with ops.
op: Sure it works fine for a single user with a 2GB database, but with 300 users and a 1.5TB db it keeps falling over.
dev: Well, how am I supposed to debug it if you don't let me work on the live environment?
op: There's no way I'm letting you near the live environment, you already produce crappy code!

Enter DevOps, with a separate team devoted to the deployment process and aimed at establishing a culture and environment where building, testing and releasing can happen rapidly, frequently and more reliably.

The specific talks focused on individual aspects of this process and culture, including the psychology involved. There was a strong emphasis that an organisation's processes will reflect its structure and communication patterns - if an organisation is compartmentalised in its thinking, with little knowledge sharing, then its processes will share in this deficiency.

Now to the specific talks.


1/1: Continuous Delivery using blue-green deployments and immutable infrastructure by Ruben Rubio.
The traditional dev model introduces risk because it encourages or permits the following as part of ongoing development:

 - Workarounds during upgrade
 - Different people performing upgrade
 - Lack of continuous documentation.

In a blue/green deployment, you create an environment where the infrastructure is immutable and only the data is mutable, through the use of containers.

During upgrades:
  - Never modify the infrastructure
  - Recreate everything that is not data.


This makes rollback easy and avoids configuration drift. It also means updated and accurate infrastructure documentation.
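
To illustrate the blue/green idea (this is my sketch using the Docker SDK for Python, not Ruben's tooling): the new "green" stack is created fresh, health-checked, and only then does traffic flip over, with the old "blue" stack kept briefly for instant rollback. Image names, container names and ports are invented:

```python
# Toy blue/green flip with the Docker SDK for Python (illustration only).
# Image tags, container names and ports are invented.
import docker
import requests

client = docker.from_env()

blue = client.containers.get("myapp-blue")   # the stack currently serving
green = client.containers.run(               # fresh, never-modified stack
    "myapp:2.0", name="myapp-green",
    ports={"8080/tcp": 8081}, detach=True)

# Health-check green before it sees real users.
if requests.get("http://localhost:8081/health").ok:
    # Flip the load balancer to green (details depend on your LB), then
    # retire blue. Rollback is just pointing the LB back at blue.
    blue.stop()
else:
    green.remove(force=True)  # blue was never touched - nothing to undo
```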


1/2 The Twelve-Factor Container by Casey West

The second talk dove-tailed nicely with the first by codifying 12 factors with best practices for container deployments:

1: One codebase tracked in revision control, many versions
Best Practice: use the environment and/or feature flags. Use DevOps.

2: Explicitly declare and isolate dependencies
Best Practice: Depend upon base images for default filesystem and runtimes
3: Store configuration in environment
Best Practice: Use environment variables, not config files
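
A minimal illustration of factor 3 in Python (variable names are hypothetical): the same image runs unchanged in dev, staging and prod because all configuration arrives via the environment.

```python
# Factor 3: all configuration comes from the environment.
# Variable names are hypothetical.
import os

DATABASE_URL = os.environ["DATABASE_URL"]  # fail fast if unset
CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```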

4: Treat backing services as attached resources; don't treat data as local
Best Practice: Connect to network-attached services using connection info from the environment.
5: Strictly separate build and run stages

Best Practice: Build immutable images, then run those images
Best Practice: Lifecycle - Build, Run, Destroy

6: Execute the app as one or more stateless processes
Best Practice: Schedule long-running processes (LRPs) by distributing them across a cluster of physical hardware.


7: Export services via port binding, don't make assumptions about addresses or ports.

8: Scale out horizontally by adding instances

9: Maximise robustness with fast startup and graceful shutdown

10: Keep dev, staging and prod as similar as possible.
Best Practice: Run containers in dev.

11: Treat logs as event streams
Best Practice: Log to stdout
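
A minimal illustration of factor 11 (my sketch): the app writes its event stream to stdout and leaves capture, routing and aggregation to the platform.

```python
# Factor 11: emit the event stream on stdout; the platform does the rest.
import logging
import sys

logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logging.getLogger("myapp").info("checkout completed order_id=%s", 1234)
```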

12: Run admin/management tasks as one-offs

Best Practice: Reuse application images with specific entry points for tasks

The mantra:
  • Repeatability
  • Reliability
  • Resiliency
All this before lunch. The rest of Day 1 continues below.

LCA 2016 - Day 1 (Part 2)

After Lunch, the talks got stuck into the security of cloud-based deployments.


1/3: Assorted Security Topics in Open Cloud: Overview of Advanced Threats, 2015’s Significant Vulnerabilities and Lessons, and Advancements in OpenStack Trusted Computing and Hadoop Encryption by Jason Cohen

Jason works for Intel, so his content was focused on Intel-based solutions. He commenced his talk with a quick summary of recent security vulnerabilities:
  • OpenSSH
  • keyring exploitation - SELinux protected against it
  • VENOM: a QEMU vulnerability exploiting a flaw in the virtual floppy controller
His message from this is to assume front-end defences will be bypassed, but don't neglect the perimeter. He then discussed APTs - Advanced Persistent Threats - using the US DoD drone hijack as a case study in spear phishing and covert exfiltration. He described the lifecycle of an APT as follows:

 - recon
 - intrusion
 - backdoor
 - credential acquisition
 - utility installation
 - privilege escalation/lateral movement/data exfiltration
 - persistence

Defences:
  • Active patch management
  • Cloud technologies
  • Separation of roles
  • Security automation
  • User security training and enforcement
  • Protecting the data: encryption and key management
  • Physical security
Jason then went into the deepest technical explanation of TPM and Intel TXT I've ever seen, covering their development history and the current state of TPM 2.0. He then discussed the build process for the Open Attestation server. I managed to find similar slides here. Quite frankly, he lost me here, although I took copious notes.


1/4: Managing Infrastructure as Code by Allan Shone

Allan Shone from Manageacloud provided a tour of techniques and tools for managing cloud and bare-metal infrastructure as though it were code. Naturally, Manageacloud did quite well in his analysis. He started with a little history.

In the 'past', everything was manual, hostnames were based on function, and documentation was fragmented and unmanageable, consisting of text files, wikis, spreadsheets, shared drives, proprietary software solutions and shell scripts. There were no recent status indicators, interfaces were cumbersome and proprietary, and keeping it all up to date was a time-consuming mess.

I say 'past', because this is frequently the reality I see.

The priorities (according to Allan) are:

Ideas: versioning, easier interface, provision of host state
First: Orchestration
Next: Managing dependencies

Allan then listed the available tools, with pros and cons as he saw them. The examples were all for AWS, leaving me to wonder how well the tools perform otherwise, particularly since I ditched AWS some time ago.



Basic Provisioners

 - Ansible: inheritance, variable configuration, easy, YAML, playbooks, agentless.
 Drawbacks: difficult to track created instances, supplier-specific wrapper, versioning is DIY, basic.

 - Chef: based on Ruby, fluent way of pushing out notifications, variable capabilities, cookbooks synchronised with the Chef server.

Drawbacks: dedicated management server, Ruby, plugin infrastructure, OS and package restrictions

 - Puppet: simple syntax, server model, automation

Drawbacks: Puppet-specific language, complex infrastructure, dependency-based

Hosts

CloudFormation: complete physical IaC, JSON, AWS, automation
Drawbacks: AWS-only, JSON can be difficult to create and maintain with no comments, not idempotent, templates are very large.

Infrastructure pieces: 
 - software management, host management, resources
 - general tool that provides one but not the other
 - arbitrary scripts

Combinations
 - software dependencies managed
 - hosts instantiated or made available on demand

Terraform: orchestrates and provisions, easy syntax
Drawbacks: tightly integrated with the vendor, syntax, delays, newcomer

Manageacloud: complete solution, simple, built-in versioning, open vendor, framework approach.

DevOps
 - automation is the key aspect, hybrid role, interface

Workflows
 - focus on processes
 - need to be suitable for the tool
 - size of infrastructure

Decisions
 - situations make decisions difficult
 - complete solutions not always necessary

Other tools: CFEngine, Salt, Heat, OneOps

Future: Better ways to share and extend, inheritance, complete control with automation of infrastructure sets, simple.

The afternoon sessions followed; fortunately they were lighter than the previous ones, giving the brain some time to heal.


1/5: Cloud Anti-Patterns by Casey West

Casey West made a reappearance with a humorous take on going cloud native. He did this by treating resistance to cloud adoption as an illness with five stages akin to the five stages of grief, and listing the "anti-patterns" that oppose adoption. The emphasis was on architecture, automation and the delivery pipeline.

Denial
 - Containers are just like tiny VMs
 - We don't need to automate continuous delivery
 
Anger
 - Works on my machine
 - Dev is just #yolo-ing to prod

Bargaining
 - We crammed this monolith into a container and called it a microservice
 - Bi-Modal IT (Gartner) as an excuse not to change
 - Legacy s/w - anything you can't iterate quickly enough
 - What if we create microservices that all talk to the same data source?

Depression
 - We made 200 microservices and forgot to set up Jenkins
 - We have an automated build pipeline but release twice a year
 - Work backed up and not released is risk

Acceptance
 - All software sucks
 - Respect CAP theorem
 - Respect Conway's Law
 - Small batch size works for re-platforming, too
 - Automate everything

Operability is:
1. Microservices architecture
2. DevOps culture
3. Continuous delivery


The last two talks I got very little out of, my brain having mostly shut down by this point.

1/6: Cloud Crafting – Public / Private / Hybrid by Steven Ellis

What I got out of this is to involve people in the automation process - PaaS (People as a Service) with ServiceNow.

The idea is for your monitoring systems to generate detailed service requests rather than waiting for someone to notice the red flag and investigate what's wrong.


1/7: Live Migration of Linux Containers by Tycho Andersen (Canonical)

This talk was about LXC and the promise of the not-yet-released LXC/LXD 2.0. It looked quite interesting, but by this time I was barely awake.

Tomorrow: The SysAdmin MiniConference.