Edward Capriolo

Monday Mar 19, 2012

More Taco Bell Programming with Solandra



Saturday Mar 17, 2012

The NoDev revolution has started

Developers' primary function is wheel reinvention. Take for example Solandra (https://github.com/tjake/Solandra), an application which allows Solr to be sharded across multiple Cassandra nodes. This application was re-implemented on HBase as https://github.com/akkumar/hbasene, only to be re-implemented on HBase again a few months later as Solbase (http://www.slideshare.net/KyungseogOh/solbase).

When developers get tired of directly writing code to solve the problems they are paid to solve, they move on to generalizations and frameworks. This allows them to write code that does not solve a specific problem, but instead solves a set of problems that your company does not have. An example of this is Java-based web development. Writing servlets to access databases quickly becomes repetitive. Changing the servlet specification and creating multiple servlet engines like Jetty, Tomcat, and GlassFish was not enough. The next step for developers is to create large frameworks like Enterprise JavaBeans. After years of development and large sums of money, these things are usually invalidated and people move on to other frameworks like Hibernate, JPA, or the Spring Framework.

Knowing multiple complex systems that accomplish next to nothing allows developers to become lead developers and negotiate much higher salaries. Once developers become advanced enough they typically know about 20 frameworks. They are capable of writing code that grants them a high level of job security, because finding another developer that knows those same 20 frameworks is almost impossible. Then they enter the "other language" phase of their career.

The other language phase is the realization that none of the 20 frameworks they know are sufficient to do their job, so they must learn some other programming language. The second language is typically very esoteric, useless, and non-standard, like Go, Clojure, or nodejs. The fewer people in the world that have heard of it the better!

This offers a chance for developers to go back to the wheel-reinventing phase under the guise of the new language. Soon the developers will be back to writing frameworks with funky names like phibernate (a Perl port of Hibernate, http://hibernate.org), or they may try uber-integration combinations like Clojure/Spring/Hibernate (http://www.coderanch.com/t/539167/clojure/Incubator/Clojure-Spring-Hibernate). Functional languages are particularly hot at the moment because they allow programmers to revisit languages like Lisp and ML that have not been wheel-reinvented since the 60s.

Developers also typically engage in internal reorganization efforts so they can develop useless frameworks faster. A few of these are extreme programming, waterfall, agile, and recently scrum. Combined with no serious formal certification process other than cheating through college, devs use these systems to divert attention away from the fact that they are reinventing code, and instead focus on how quickly and efficiently they are reinventing it.

Now that we understand the role of the developer in the traditional enterprise we can start to understand the NoDev alternative.

NoDev stresses not coding something unless you really, really have to. Our first action is not to fork a codebase on GitHub. For example, if there is a business need for full text search, use Puppet to quickly install an already existing solution like Solandra into a development environment. Also use your bash skills to deliver a small relevant proof of concept, or quickly write the entire solution without any frameworks. Actions like these save your company thousands of man hours and eliminate the need for a good portion of development.

NoDev focuses on the end user. The end user is not the developer; the end user is the customer or person using the application. End users care about results. End users would rather see a nice report generated by Crystal Reports than hear about how a developer spent 3 hours designing that same report in functional Scala. End users don't care that you spent 6 hours learning and setting up Scalding when you can get the same results with a three-line Hive program. NoDev embraces the fact that sometimes you can save money by spending money, and that solving problems does not always require more devs and more code.

Suppose your company wants to add a new feature to their product. In the ass-backwards DevOps fashion, developers have liberal, god-like access to your MySQL database. They will likely use some code generation utility that will create the table in the most inefficient way possible, creating columns as tinytext rather than varchar(25). They will then use continuous deployment to put this poorly designed system into production, which will likely affect other applications running on the database and cause the DBA to work late or worry about load alarms. Meanwhile ops staff will be busily adding more read slaves to handle the load while the developer is at home brogramming their next startup idea.

The key focus of NoDev is carrying a big stick and using it often. First, create users with the minimal privileges needed to accomplish a task. For a logging application, this may mean the user only has access to INSERT into SPECIFIC columns of SPECIFIC tables. Second, insist that developers come to you for approval with the schema design, the estimated size of the table, and the number of reads and writes per day. Developers tend to hate this because, according to whatever the flavour-of-the-week coding system is (let's say scrum), they now have tight deadlines. Because of their "tight deadlines" they can not afford to have "operations people" "slowing them down" by telling them that their schema sucks and will need to be redesigned. When they complain about how they "can not complete their waterfall", simply tell management that "we need more DBAs", not more devs, because that is the truth! We do not compromise best practices and give access rights away because we are overworked. Once you start letting developers do schema design their egos become super inflated and then they start ideas like NoOps, because while you are working thanklessly and endlessly behind the scenes cleaning up their mess, they come to the conclusion that you are not needed any more.
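For the curious, here is roughly what that kind of column-level grant looks like when issued through plain JDBC. This is a sketch only; the database, table, column, user, and host names are made up for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MinimalPrivilegeGrant {
  public static void main(String[] args) throws Exception {
    // Connect as an administrative user; the URL and credentials are placeholders.
    try (Connection conn = DriverManager.getConnection(
            "jdbc:mysql://dbhost:3306/mysql", "admin", "secret");
         Statement stmt = conn.createStatement()) {
      stmt.executeUpdate("CREATE USER 'logwriter'@'10.0.0.%' IDENTIFIED BY 'changeme'");
      // The logging application may INSERT into these specific columns of this
      // specific table and nothing else.
      stmt.executeUpdate("GRANT INSERT (event_time, user_id, message) "
          + "ON appdb.audit_log TO 'logwriter'@'10.0.0.%'");
    }
  }
}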

"Other language" problems also have no place in NoDev companies. NoDev enforces a company wide policy of 2 languages only. A scripting language and a compiled language. This eliminates the needs for developers to spend time researching serialization frameworks like AVRO, MessagePack, ProtoBufs, and Thrift to communicate simple strings between languages like OCamel and BashOnBalls. Again, running your enterprise in a NoDev fashion has saved your company thousands of man hours.

NoDev does not hate the cloud. Operations always looks to make things better and more efficient. But a NoDev organization does not allow people who know nothing about operations to tell them how the "cloud" is better. If operations decides to move to the "cloud", the way operations is done does not change. As mentioned, NoDev will administer databases in the same strict way. NoDev will still not allow devs to have root in production in the cloud. NoDev will still not allow users who do not know anything about networking to have firewall control in the cloud. Sudo will still work the same way in the cloud. Normal capacity planning done by ops and management will still decide how resources are allocated in the cloud.

Now that you understand some basic concepts of NoDev and some examples, we can keep balance in the enterprise. We can stand our ground as operations, DBAs, administrators, and switching and routing engineers, because for every NoOps use case out there we can come up with an equivalent NoDev use case as well.

NoOps perspective:


Devs want to DevOps, fire ops, and move to the cloud with Elastic MapReduce.

NoDev perspective:

Ops wants to NoDev, fire dev, move to Datameer and Crystal Reports, and let business analysts do dev work.

Friday Mar 16, 2012

NoOps + DevOps = YesDelusional: No IT professional is safe

I just read "NoOps DevOps Misses the Point".

NoOps literally means no IT professional is safe. I get it, us Ops guys are just janitors; all we do is "rack servers, run cat-5 cable" and we are useless. We can not write glorious code, so we will never be as useful or important as developers. But now even very competent switching and security professionals are not safe: "PaaS means most of us will no longer configure firewalls or load balancers".

"Second, silo’d development and ops cultures and behaviors are problematic. Development trying to maximize change, while ops tries to minimize it, reduces efficiency, responsiveness, quality, and mutual respect, all at the same time. What’s needed is a unified focus on simultaneously maximizing agility and reliability."

By this argument, Quality Assurance also reduces efficiency by slowing things down. Ops' goal is not to minimize change; it is to manage it. If that means change happens slower, that is only a side effect of managing risk.

Soooo.

Didn't SQL Slammer teach us NOT to put our database in a DMZ that the entire internet can reach?

"by default CountDB gives everyone in the world admin access to your instance, it also doesn't listen on an external interface"

You can see people bitching back and forth about whose fault this is: "It's the developers, it's the administrators, it's the firewall manager."

Most developers are not proficient in networking, with good reason: the only way to become proficient in networking is by doing it. There are not enough hours in the day to be great at Java generics and great at border routing and firewalls.

Should a business put a pager in the hand of a guy who really is not a stud firewall manager?

Fuck no!

Do I trust this person, who spends most of their time writing code, to make a critical firewall change?

Fuck No!

Do I trust someone who is not at all versed in network security standards and best practices to log into a cloud-based firewall management screen and secure my CouchDB server?

Fuck no!

"However we label them, these activities need to happen, someone needs to be accountable for them, and they need to be integrated into an overall, coherent set of activities focused on delivering value to customers"

Thank you! These activities are Ops! These are exactly the things good operations people do. This is not a part-time job. This is what the DevOps and NoOps people keep missing over and over and over and over again.

Remember in one of my last blogs how I said DevOps was a power grab over "not giving developers root access"? Well, if you need more proof of that statement:

"Once we’ve trimmed and redirected traditional ops work/staff/budgets, what should we be spending our time, resources, and money doing instead?"

So funny... An article that mentions all the important stuff Ops people do, then wavers with the statement "however we label them", and then moves on to figuring out what they are going to do with our salaries!




Wednesday Mar 07, 2012

ground computing > cloud computing

Within the last year:

https://status.heroku.com/incident/308 - 90 minute downtime

http://siliconireland.blogspot.com/2012/03/microsoft-azure-outage-leap-year.html - 10 hour outage caused by a leap year bug

http://money.cnn.com/2011/04/21/technology/amazon_server_outage/index.htm - "The outages began Thursday morning just before 5 a.m. ET and were still ongoing more than 24 hours later."

As it stands, cloud computing is:

  1. More expensive
  2. Less reliable
  3. Lower performance (Xen + latency + noisy neighbours)

I think the people who champion cloud computing literally have their head in the clouds.

Do not get me wrong. I do believe software will enter the conceptual age. But not if cloud providers can not keep their shit running! It's all a pipe dream if they can not deliver better uptime than a 4-man ops team.

Think about a data center like a car. Would you rather have a car that has a radio with 5 preset buttons but periodically cuts off while highway driving, or would you rather have a radio dial from the 80s and a reliable engine that starts on a cold day and runs like a champ?

"So where does this all end? NoOps - Where building and running an app is purely a developer process."

NoOps... haha... that is a good one. If only cloud 2.0 were half as effective at ops and uptime as I am at writing code, it would work. Until one of these platforms can actually hit even one 9 of uptime, they are toys and not replacing anything.

 

Saturday Mar 03, 2012

What is the deal with IronCount?

You may have noticed that real time analytics is the new hotness. Just about every NoSQL database these days seems to support some write-without-read counter system. Likewise many people are building their own systems to funnel data to processing systems and increment said counters.

I view real time analytics as the next ETL. By this I mean that stuffing tons of data into Hadoop and building hourly or daily aggregations was cutting edge up until a minute ago. It is still a critical part of a "web scale" infrastructure, but real time analytics is the next/current big deal.

What are the key parts of a (near) real time analytics infrastructure?

1) Data Aggregation

The system has to get data together in a way that it can be processed.

2) Distributed processing

With data being aggregated some algorithm or transformation needs to be applied to it.

3) On demand results

The system has to work at small granularity. This could be one or five minute windows, possibly smaller; writing and retrieving inside these windows should be low latency.

4) Scale

The system has to support more data, more algorithms/aggregations, and more requests without needing to be redesigned; every part should be horizontally scalable.

I am going to talk about IronCount and why I think it can be a key part of a real time infrastructure. First, a brief history lesson. I came up with the inspiration for IronCount while talking with Joe Stein. I was tired of waiting for Twitter to open source Rainbird, which looks like it is never going to happen at this point. I work by the iconic Flatiron Building, hence the Iron in the name, and Count because the standard use case is typically to read raw data and increment N counters (usually in a system like Cassandra).

[Image: the Flatiron Building]

IronCount is a consumer manager for Apache Kafka. Let's talk about Kafka, which solves need #1 in RTA (and also helps with #3 and #4).

Kafka offers a high throughput, low latency, distributed producer/consumer queue system. Unlike JMS message queues, it does not try to implement all kinds of complicated and complex JMS semantics. However, it does support one feature that is very useful: messages are routed by key, so all messages with the same key land on the same partition. This is critical because it allows an application consuming messages to know that messages for a given key are only being consumed by a single consumer. For example, you can route weblogs by user_id to ensure that all the hits for one user are processed by the same consumer. (Note that the loss or addition of a consumer changes where messages will route.) With Kafka's distributed architecture we can process data and horizontally scale out.
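To make keyed routing concrete, here is a rough sketch using the current Kafka Java client (which postdates this post and is not the 0.7-era API IronCount was built against); the topic name and log format are made up for illustration:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class WeblogProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      String userId = "user-42";                          // hypothetical user id
      String hit = userId + "\thttp://example.com/page";  // hypothetical weblog line
      // Keying by user_id means every hit for this user hashes to the same
      // partition, and is therefore seen by the same consumer.
      producer.send(new ProducerRecord<>("weblogs", userId, hit));
    }
  }
}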

Distributed processing is RTA requirement #2, which is where IronCount comes in. It is great that we can throw tons of messages into Kafka, but we do not have a system to process these messages. We could pick, say, 4 servers on our network and write a program implementing a Kafka consumer interface to process messages, write init scripts, write Nagios checks, and manage it all ourselves. How do we stop it, start it, and upgrade it? How should the code even be written? What if we need to run two programs, or five, or ten?

IronCount gives simple answers to these questions. It starts by abstracting users away from many of the questions mentioned above. Users need only implement a single interface, and possibly only a single method, handleMessage(Message m).

public interface MessageHandler {
  // Receives the workload (topic/handler configuration) assigned to this handler.
  public void setWorkload(Workload w);
  // Invoked for each message consumed from Kafka; put your processing logic here.
  public void handleMessage(Message m);
  // Invoked on shutdown so the handler can flush state and clean up.
  public void stop();
  // Hands the handler a reference to the worker thread that runs it.
  public void setWorkerThread(WorkerThread wt);
}

Turns out this pattern makes many problems that seem complicated with other systems easy.

For example, suppose someone wants to implement something like Scribe or Flume, i.e. aggregate logs into HDFS.

[Image: Caligraphy]

What, 100 lines of code!? Doesn't this have to be hard?

What about joining and re-routing streams, a la Yahoo's S4?

[Image: Map Reduce demo]

What? Don't you need some special application made just for this?

What if you wanted to count URLs like Rainbird does and save them into Cassandra?

[Image: Mocking Bird]
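To give a flavor of what such a handler might look like, here is a hedged sketch of a Rainbird-style URL counter (this is not the code from the screenshot above). It assumes the IronCount types from the interface shown earlier are on the classpath, that Message#payload() returns a ByteBuffer of UTF-8 text, and that the log line is tab-separated with the URL in the second field; all of those details are assumptions for illustration.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class UrlCountHandler implements MessageHandler {

  private final ConcurrentHashMap<String, AtomicLong> counts = new ConcurrentHashMap<>();

  public void setWorkload(Workload w) { /* nothing to configure in this sketch */ }

  public void setWorkerThread(WorkerThread wt) { /* not needed in this sketch */ }

  public void handleMessage(Message m) {
    ByteBuffer payload = m.payload();                 // assumed accessor, see note above
    byte[] bytes = new byte[payload.remaining()];
    payload.get(bytes);
    String[] fields = new String(bytes, StandardCharsets.UTF_8).split("\t");
    if (fields.length < 2) {
      return;                                         // skip malformed lines
    }
    counts.computeIfAbsent(fields[1], k -> new AtomicLong()).incrementAndGet();
    // A real Rainbird-style handler would periodically flush these counts to
    // Cassandra counter columns rather than keep them in memory.
  }

  public void stop() {
    counts.forEach((url, c) -> System.out.println(url + " -> " + c.get()));
  }
}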

I think by now you might be getting what I am driving at. Rather than having a highly specialized infrastructure to handle task X, Kafka, IronCount, and maybe a little Cassandra or Hadoop can get the job done.

I am not trying to take anything away from Rainbird, Flume, Scribe, S4, Storm, or any of the other technologies I have mentioned. But if IronCount can demonstrate the same or similar results without having to learn a complex API or implement a large special-purpose architecture, that says something. With IronCount you can quickly write a Handler to do exactly what you need, rather than dealing with the complexity of making another system work well for exactly what you are trying to do.

