The impact of friction on doing the right thing


Friction vs. Momentum in Life

This post is not about friction and momentum in the physics sense, but rather in a philosophical sense – the way they impact a person’s business and personal life.

Basically, if you reduce friction on an object or in a process and increase its velocity, you get sustained movement – momentum. Conversely, the more friction a process has in it, the more likely it is to grind to a halt without something or someone moving it along. When these concepts are understood, processes can be changed to make it easier to do the right thing than the wrong thing. This has huge implications in all aspects of life and makes it much easier to hit goals and objectives.

Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other.

Source: Wikipedia

In classical mechanics, linear momentum or translational momentum is the product of the mass and velocity of an object. Like velocity, linear momentum is a vector quantity, possessing a direction as well as a magnitude: p = mv.

Source: Wikipedia

Impact of friction: Hurricane Sandy Red Cross Donations

Like several hundred million other people, I’ve been watching a lot of the news about Hurricane Sandy that hit the east coast last week. The devastation that occurred is unreal.

Being in Michigan I was lucky enough to just get some wind and a bit of rain. I didn’t even lose power for any part of the storm.

Friday night (11/2/2012) I tuned into the Red Cross / NBC Hurricane Sandy telethon. Aerosmith, Bruce Springsteen, Sting and others performed, and it was a great show. The telethon raised almost $23 million to help victims of the tragedy. During the show they mentioned how to donate to the Red Cross from an iPhone. Since I have an iPhone and was going to donate anyway (link to the Red Cross Hurricane Sandy donation webpage), I flipped through to see how easy it was.

Apple did an outstanding job with the process and technology to donate. The process is as smooth and painless as possible. To donate to the Red Cross all a person has to do is:

  1. Go into iTunes.
  2. Click on the Red Cross logo on the homepage.
  3. Sign in with your iTunes password.
  4. Pick how much money to donate from a drop-down.
  5. Click OK and you’re done.

Four clicks and a password to help hurricane victims. Pretty amazing, really.

Apple has eliminated nearly all friction in donating to the hurricane victims for the millions of people with iPhones. This drives far better results than the clunky, frustrating processes that are so common today.

How many fewer people would donate if they had to fill out a long form with a bunch of useless information? Sure, this is common sense, but it is still not common in practice.

Apple eliminated friction in the donation process and thereby made it easier to do the right thing than it is to do the wrong thing.

When it’s easier to do the right thing than the wrong thing, positive momentum can be created, which then builds on itself and creates long-lasting change.

There are a lot of ways this impacts both business and personal life that we’ll explore at a later date.

Anywhere friction can be eliminated from one’s life, whether in business or personally, the outcome will be better.


PS – Here is another link to the Red Cross Hurricane Sandy donation webpage


Website Security – Interesting 65Gbps DDoS Case Study

The web can be a scary place, with all sorts of website and internet security issues that may arise when you’re running a public-facing site. Issues like cross-site scripting, SQL injection, email harvesting, comment spam and DDoS attacks occur on a daily basis.

There are many different ways to combat these problems, with varying levels of success. There is a huge industry of web security software and tools out there, and it is changing rapidly as cloud computing reshapes the underlying infrastructure. The one approach that does not work well (and never has) is the ‘set it and forget it’ approach that many people take when they create a new site. Since website security is an ongoing challenge, it’s best to use professional-level services and stay up to date with everything. Unfortunately, some of these services can be quite pricey.

We take website security and performance very seriously and offer a range of services in these areas. We use multiple services and techniques to protect all of the public (and private) sites we create. One of them is a security service called CloudFlare, which we describe below along with a case study they published over the weekend. Here is a quick overview of the service: CloudFlare is a fast-growing, venture-capital-backed web security and performance start-up.

CloudFlare presently serves over 65 BILLION (yes, billion, not million) pageviews a month across the sites they support.

Here is some perspective on their size from a VentureBeat article: “We do more traffic than Amazon, Wikipedia, Twitter, Zynga, AOL, Apple, Bing, eBay, PayPal and Instagram combined,” chief executive Matthew Prince told VentureBeat. “We’re about half of a Facebook, and this month we’ll surpass Yahoo in terms of pageviews and unique visitors.”

They have a great list of features:

  • Managed Security-As-A-Service
  • Completely configurable web firewall
  • Collaborative security and threat identification – Hacker identification
  • Visitor reputation security checks
  • Block list, trust list
  • Advanced security – cross-site scripting, SQL injection, comment spam, excessive bot crawling, email harvesters, denial of service
  • 20+ data centers across the globe
  • First-level cache to reduce server load and bandwidth
  • Site-level optimizations to improve performance

Over the weekend they had some interesting events happen in their European data centers and wrote a couple of blog posts about it, linked here and summarized below:

What Constitutes a Big DDoS?

A 65Gbps DDoS is a big attack, easily in the top 5% of the biggest attacks we see. The graph below shows the volume of the attack hitting our EU data centers (the green line represents inbound traffic). When an attack is 65Gbps that means every second 65 Gigabits of data is sent to our network. That’s the equivalent data volume of watching 3,400 HD TV channels all at the same time. It’s a ton of data. Most network connections are measured in 100Mbps, 1Gbps or 10Gbps so attacks like this would quickly saturate even a large Internet connection.

[Graph from the CloudFlare post: inbound attack traffic at their EU data centers]

To launch a 65Gbps attack, you’d need a botnet with at least 65,000 compromised machines each capable of sending 1Mbps of upstream data. Given that many of these compromised computers are in the developing world where connections are slower, and many of the machines that make up part of a botnet may not be online at any given time, the actual size of the botnet necessary to launch that attack would likely need to be at least 10x that size.
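
For perspective, the quoted figures are easy to sanity-check. In the sketch below, the 1 Mbps per-machine rate comes from the post itself, while the HD-channel bitrate is an assumption chosen only to land near the quoted "3,400 HD TV channels" figure.

```python
# Back-of-the-envelope check of the quoted DDoS figures (the HD-stream bitrate
# is an assumption for illustration; it is not a number from the CloudFlare post).

ATTACK_GBPS = 65          # reported attack size
PER_BOT_MBPS = 1          # per-machine upstream rate quoted in the post
HD_STREAM_MBPS = 19       # assumed bitrate of one HD TV channel

bots_at_full_rate = ATTACK_GBPS * 1000 / PER_BOT_MBPS
print(f"Minimum bots at 1 Mbps each: {bots_at_full_rate:,.0f}")       # 65,000

# Many bots are on slow links or offline, hence the post's ~10x estimate.
print(f"Likely botnet size (10x):    {bots_at_full_rate * 10:,.0f}")  # 650,000

hd_channels = ATTACK_GBPS * 1000 / HD_STREAM_MBPS
print(f"Equivalent HD channels:      {hd_channels:,.0f}")             # ~3,421, close to the quoted 3,400
```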

 

In terms of stopping these attacks, CloudFlare uses a number of techniques. It starts with our network architecture. We use Anycast which means the response from a resolver, while targeting one particular IP address, will hit whatever data center is closest. This inherently dilutes the impact of an attack, distributing its effects across all 23 of our data centers. Given the hundreds of gigs of capacity we have across our network, even a big attack rarely saturates a connection.

 

At each of our facilities we take additional steps to protect ourselves. We know, for example, that we haven’t sent any DNS inquiries out from our network. We can therefore safely filter the responses from DNS resolvers, dropping the response packets at our routers or, in some cases, even upstream at one of our bandwidth providers. The result is that these types of attacks are relatively easily mitigated.
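
That filtering idea boils down to one simple rule: a network that never sends DNS queries can treat any inbound DNS response as attack traffic. Here is a minimal Python sketch of that rule, not CloudFlare’s implementation (they apply it at routers and upstream providers); the packet structure and field names are invented for illustration.

```python
# Minimal sketch of dropping unsolicited DNS responses. A network that never
# originates DNS queries can treat any inbound packet from UDP source port 53
# as reflection-attack traffic and drop it at the edge.

from dataclasses import dataclass

DNS_PORT = 53

@dataclass
class Packet:
    protocol: str      # "udp", "tcp", ...
    src_port: int
    dst_ip: str

def should_drop(pkt: Packet, we_sent_dns_queries: bool = False) -> bool:
    """Drop DNS responses when no resolver queries ever leave this network."""
    return (not we_sent_dns_queries
            and pkt.protocol == "udp"
            and pkt.src_port == DNS_PORT)

# Example: a reflected DNS amplification packet arriving at the edge.
attack_pkt = Packet(protocol="udp", src_port=53, dst_ip="203.0.113.10")
print(should_drop(attack_pkt))   # True -> filtered before it reaches the origin
```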

 

What was fun to watch was that while the customer under attack was being targeted by 65Gbps of traffic, not a single packet from that attack made it to their network or affected their operations. In fact, CloudFlare stopped the entire attack without the customer even knowing there was a problem. From the network graph you can see after about 30 minutes the attacker gave up. We think that’s pretty cool and, as we continue to expand our network, we’ll get even more resilient to attacks like this one.

Link to original post: http://blog.cloudflare.com/65gbps-ddos-no-problem

The big takeaway for us is that we’re in a better spot by using CloudFlare. There are very few security software tools or services out there that would be able to handle this sort of attack, mitigate it and then describe it in such a short period of time.


Software Development Life Cycle (SDLC) Case Study – Result = $440M Loss


Software Development Life Cycle (SDLC) Importance

Solid Logic Technology’s founders have experience across the financial industry, specifically in the development of quantitative trading and investment systems. Many of the things we’ve learned along the way shape the way we develop software for clients in other industries. Most notably, we’ve learned that software quality is extremely important and that ‘software bugs’ cost a lot of money. The case study below shows how important in-depth software development, testing and launch management are for a company.

On August 1st, 2012, reports came out that Knight Capital Group, a prominent electronic market-making firm specializing in NYSE equities, had lost an estimated $440 million due to a ‘software bug’. The news spread across financial news outlets like Bloomberg, The New York Times, CNBC and The Wall Street Journal. Knight and similar firms trade US equities electronically using sophisticated computer algorithms, with little to no human involvement in the process. While we will probably never hear the full story behind the ‘software bug’, it is suspected that a software coding error that was not quickly identified caused the loss. The loss is approximately four times Knight’s 2011 net income of $115 million, and it appears to have nearly destroyed the firm; at this point it looks like Knight will be bought or end up in bankruptcy.

While unfortunate, this example carries lessons for any software project.

So what can we learn and take away from this incident?

  1. Software is not perfect, especially right after it is released.
  2. A more comprehensive Software Development Life Cycle (SDLC) process and launch plan probably would have reduced the loss to a more manageable amount.
  3. Always have a contingency plan for a new launch.
  4. If new software is ‘acting funny’, it probably has a problem and needs to be pulled from production and fixed.
  5. When possible, conduct a series of small pilots or beta tests along the way, in a lower-impact setting.
  6. If you cannot fully test the changes, roll them out slowly to minimize the potential for errors early on.
  7. Have a ‘kill switch’ and know how to use it (a minimal sketch follows this list).
  8. Have a formal SDLC process and follow it for all revisions.
  9. Use source control for all software changes.
  10. Have a defined launch process.
  11. Have a way to quickly revert the changes back to the previous version.
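
To make items 7 and 11 concrete, here is a minimal Python sketch of a file-based ‘kill switch’ that lets operators fall back to the previous code path without a redeploy. The flag file location and function names are hypothetical; this is not a description of Knight’s systems or of any particular framework.

```python
# Minimal sketch of a "kill switch" around a new code path (item 7) with a fast
# fallback to the previous behavior (item 11). The flag lives in a plain file so
# operators can flip it without a redeploy; all names here are illustrative.

import json
from pathlib import Path

FLAGS_FILE = Path("/etc/myapp/flags.json")   # hypothetical flag location

def new_code_enabled() -> bool:
    """Re-read the flag on every call so flipping the file takes effect immediately."""
    try:
        return json.loads(FLAGS_FILE.read_text()).get("new_order_router", False)
    except (OSError, ValueError):
        return False                          # fail safe: fall back to the old path

def route_order(order):
    if new_code_enabled():
        return new_router(order)              # freshly released logic
    return legacy_router(order)               # known-good previous version

def new_router(order):
    ...

def legacy_router(order):
    ...
```

The key design choice is that the flag is re-read on every call and the fallback is the known-good path, so flipping one file immediately reverts behavior.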

These are basic best practices that all software development firms should follow in order to consistently deliver high-quality software. It’s unfortunate that there is a case study like this, but these types of incidents are more common (though not at this scale) than most people imagine. I’m sure the team at Knight completed many of the above items, but something got away from them.

We put a huge amount of thought and effort into the process of software development and the consistent high level of quality that a solid process brings. We’re currently working on publishing a set of Software Development best practices – please contact us for a pre-release version.


Amazon EC2 Cloud Computing Cost Savings

This post is a long one. It is part of an ongoing series on the benefits we’ve identified in our experience using cloud computing technologies, most notably Amazon Web Services (AWS) and different VMware products.

Overview

“The cloud”, specifically Amazon Web Services, has dramatically changed the landscape of High Performance Computing and Big Data processing in recent years. Many things are computationally possible today that would not have been just a few years ago. An organization can cost-effectively set up, launch and use a seemingly limitless amount of computing resources in minutes.

Most of the news coverage today is focused on using Hadoop on “Big Data”. SLTI has experience with this technology, but what happens if your data set doesn’t fit nicely into that framework? The writeup below describes how we handled one such challenge.

Business Problem

The problem SLTI was trying to solve fits into the Business Intelligence/Data Mining area of the financial industry: testing different inputs for an algorithm that is the basis of a quantitative equity trading system.
The algorithm involves complex mathematical calculations and processing requirements across a large and diverse data set. The problem required testing a wide range of input parameters across four dimensions, and the algorithm was tested against sixty-two different data sets. In total, roughly 9.9 billion data points had to be analyzed to come up with something actionable.

While the program logic is specific to the financial trading industry, the underlying concepts are shared across many other industries – engineering, legal services, etc. The question to ask is simple –

How many processing tasks have you had that you wished ran faster? Can they be split into multiple pieces and run in parallel? How can this be done cost-effectively?

Information Technology and Software Solution

Cloud Computing has dramatically changed the cost landscape of IT Infrastructure, especially for prototype or short run projects like this one. In a general sense, CPU cycles and RAM are cheap compared to the highly skilled labor required to improve performance by several orders of magnitude.

Our goal was simple – make the program run much faster with minimal effort.

We have a long list of projects to complete, so development time is our most precious resource and we didn’t want to rewrite the entire program. We kept the software changes and technology solution simple – essentially an 80/20 approach to setting up the infrastructure and handling the code changes, one that still solves the problem, albeit in a less elegant fashion.

To accomplish our goal, we modified the program to operate on a user-defined subset of the original data set. This allows the problem to be split into many small parts spread across multiple servers, with each server handling the processing for its own subset.
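
As a rough illustration of that partitioning idea (not our actual code), each worker can derive its own slice of the full parameter grid deterministically from its index, so no coordination is needed to decide who processes what. The dimension sizes below are illustrative; only the sixty-two data sets match the study.

```python
# Deterministic partitioning of a parameter grid across worker nodes.

from itertools import product

def parameter_grid(dim_a, dim_b, dim_c, dim_d, data_sets):
    """All combinations of the four input dimensions across every data set."""
    return list(product(dim_a, dim_b, dim_c, dim_d, data_sets))

def my_share(grid, worker_index, worker_count):
    """Round-robin slice of the grid for one worker node."""
    return grid[worker_index::worker_count]

# Illustrative dimension sizes; the real study covered 62 data sets and ~9.9B points.
grid = parameter_grid(range(10), range(10), range(10), range(10), range(62))
work_for_node_3 = my_share(grid, worker_index=3, worker_count=16)
print(len(grid), len(work_for_node_3))   # 620000 combinations total, 38750 for this node
```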

IT Infrastructure Architecture

Staying with an 80/20, simple-solution-first approach, we created a solution with the following pieces:

  1. Linux-based application server (an Amazon EC2 Amazon Machine Image (AMI); alternatively, a VMware image could be created and converted to an AMI)
  2. Highly-Available, scalable, central filestore (Amazon S3)
  3. Master configuration data stored in Amazon S3

The cluster itself comprises sixteen cc2.8xlarge EC2 instances. Each instance provides 88 EC2 Compute Units, 2 x Intel Xeon E5-2670 processors (16 cores per instance), 60.5 GB of RAM and 3,370 GB of local storage. In total, the cluster provided 1,408 Compute Units, 256 cores and 968 GB of RAM.
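
The aggregate figures follow directly from the per-instance specs:

```python
# Aggregate cluster capacity from the per-instance cc2.8xlarge specs listed above.
INSTANCES = 16
ECU_PER_INSTANCE = 88
CORES_PER_INSTANCE = 16
RAM_GB_PER_INSTANCE = 60.5

print(INSTANCES * ECU_PER_INSTANCE)       # 1408 EC2 Compute Units
print(INSTANCES * CORES_PER_INSTANCE)     # 256 cores
print(INSTANCES * RAM_GB_PER_INSTANCE)    # 968.0 GB of RAM
```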

The basic logic of the program goes something like this (a minimal worker sketch follows the list):

  1. Load all required data into Amazon S3
  2. Launch the pre-configured AMI, which runs the program once the server boots:
    • Get a specific subset of the data for the node from the central filestore
    • Update the master configuration data to notify the other nodes what data still needs to be processed before, during and after each test run
    • Save the results to the central filestore
    • Shut down the node after the work is completed
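
Here is a simplified Python sketch of those worker-side steps, written against Amazon S3 with boto3. The bucket name, object keys and backtest binary path are hypothetical, and the shared-configuration update is shown in its simplest form (a plain S3 read-modify-write, which is not atomic), not as it would be hardened for production.

```python
# Simplified per-node worker loop: claim a subset, process it, upload results,
# and shut the node down when nothing is left. Names and paths are illustrative.

import json
import subprocess
import boto3

s3 = boto3.client("s3")
BUCKET = "slti-backtest-data"            # hypothetical bucket name

def claim_next_subset():
    """Pop the next unprocessed subset id from the shared config object.
    Note: S3 read-modify-write is not atomic; real coordination needs a lock or queue."""
    body = s3.get_object(Bucket=BUCKET, Key="master-config.json")["Body"].read()
    config = json.loads(body)
    if not config["pending"]:
        return None
    subset_id = config["pending"].pop(0)
    config["in_progress"].append(subset_id)
    s3.put_object(Bucket=BUCKET, Key="master-config.json",
                  Body=json.dumps(config).encode())
    return subset_id

def process(subset_id):
    """Download the node's slice, run the backtest binary, upload the results."""
    s3.download_file(BUCKET, f"input/{subset_id}.csv", "/tmp/input.csv")
    subprocess.run(["/opt/slti/run-backtest", "/tmp/input.csv", "/tmp/results.csv"],
                   check=True)
    s3.upload_file("/tmp/results.csv", BUCKET, f"results/{subset_id}.csv")

if __name__ == "__main__":
    while (subset := claim_next_subset()) is not None:
        process(subset)
    subprocess.run(["sudo", "shutdown", "-h", "now"])   # node terminates itself when done
```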

Cost Analysis

This is not intended to be a comprehensive cost comparison, but rather a quick TCO comparison using some standard costs. To do this quickly, we used the AWS EC2 Cost Comparison Calculator at the bottom of this page.

SLTI’s EC2-based approach is roughly 99.5% cheaper than an in-house solution. There are other, similar examples of the ROI of an EC2-based approach for this type of workload here.
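
To show the shape of that TCO comparison, here is an illustrative calculation. The hourly rate, run length and server price below are placeholder assumptions, not the actual figures behind the 99.5% number; the point is that a short-lived cluster is billed only for the hours it runs, while in-house hardware must be purchased outright.

```python
# Illustrative shape of the TCO comparison only -- all prices are placeholders.

CLUSTER_INSTANCES = 16
INSTANCE_HOURLY_RATE = 2.40        # assumed on-demand $/hr for one cc2.8xlarge
RUN_HOURS = 24                     # assumed length of the processing run

cloud_cost = CLUSTER_INSTANCES * INSTANCE_HOURLY_RATE * RUN_HOURS

# In-house alternative: buy comparable servers that sit mostly idle afterwards.
SERVER_PURCHASE_PRICE = 12_000     # assumed cost per comparable physical server
in_house_cost = CLUSTER_INSTANCES * SERVER_PURCHASE_PRICE   # ignores power, space, admin

savings = 1 - cloud_cost / in_house_cost
print(f"Cloud run:  ${cloud_cost:,.0f}")
print(f"In-house:   ${in_house_cost:,.0f}")
print(f"Savings:    {savings:.1%}")   # the short-run workload is what drives the gap
```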

Key Takeaways

  1. Using the cloud enables a much more adaptive and scalable data processing infrastructure than in-house IT hardware.
  2. If you’re not using AWS (or something similar), you’re overpaying for IT infrastructure, especially for short run or highly variable workloads.

This post is a short overview of some of the ways we’re using advanced cloud computing technology to help our clients improve their IT agility and reduce IT expenses. We’re currently working on a few case studies that describe these concepts in more detail. To receive updates on new research, sign up using the form on the right of this page.

If you’d like to explore a specific use case for your situation, please contact us.


Cloud Computing Categories

This post is the second in an ongoing series on the benefits we’ve identified in our experience using cloud computing technologies, most notably Amazon Web Services (AWS) and different VMware products. The first post focused on some of the financial benefits and cash flow impacts of the technologies we use on all client projects. This post introduces the differences between “cloud computing” technologies in order to set the stage for discussing how the products in these categories can be used and some of their benefits.

Cloud Computing Definitions & Categories

Because “the cloud” is a vague and overused term these days, we first need to define some terms before diving into their impact and benefits. In the interest of time, we’ll use a summary of Wikipedia’s definitions rather than creating our own.

Virtualization – “In computing, virtualization (or virtualisation) is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system, storage device, or network resources.” The most notable example of this software is made by VMware (ESX, Workstation, etc.).

Source: http://en.wikipedia.org/wiki/Virtualization

Cloud Computing & Infrastructure as a Service (IaaS) – “Cloud computing refers to the delivery of computing and storage capacity as a service to a heterogeneous community of end-recipients. The name comes from the use of clouds as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts services with a user’s data, software and computation over a network.” Amazon Web Services (AWS), Rackspace, etc. would fall under this category.

Source: http://en.wikipedia.org/wiki/Cloud_computing

Platform As A Service (PaaS) – “Platform as a service (PaaS) is a category of cloud computing services that provide a computing platform and a solution stack as a service. Along with SaaS and IaaS, it is a service model of cloud computing. In this model, the consumer creates the software using tools and libraries from the provider. The consumer also controls software deployment and configuration settings. The provider provides the networks, servers and storage.” Heroku, PHPFog, AppFog, etc. would fit into this category.

Source: http://en.wikipedia.org/wiki/Platform_as_a_service

Cloud Computing Technology Stages

These categories have been introduced over the past nine or so years and have matured considerably, especially recently. Many of the products that we looked at a year ago and felt were not ready for prime time are now ready. Our use of cloud computing technology has matured considerably as well, and we’ve progressed through these stages ourselves.

Our preference at this point is to use a Platform as a Service where appropriate. When that is not possible, we use a customized configuration running on Amazon Web Services or another cloud provider. In the next post we’ll discuss some of the benefits of this approach and the impact these decisions have had on our development process.
