Thursday, July 31, 2014

Why I am letting my ACM membership lapse

I have been thinking about this lately and, interestingly enough, Slashdot is carrying an article on why the ACM is not interesting to programmers (link to the original article by Vint Cerf).

For me it is pretty simple - ACM and the Communications of the ACM used to be where seminal papers and new algorithms were published, a place where you could explore clever ideas, tricks and fundamental issues all at once. Now CACM has turned into a semi-scientific, semi-IT-manager set of articles with absolutely no attention paid to people who want to practice _computer science_. I swear, if I read one more article on "enterprise IT something", I am going to organize a public burning of the copy of CACM in question...

Friday, July 18, 2014

Saturday, July 05, 2014

Of horses, rodeos and people

This 4th of July, like every 4th of July, we went to the local rodeo. It is a nice celebration, and just being in close proximity to so many horses and cattle makes it all worthwhile.

However, this year an ugly conclusion struck me out of nowhere: if you love horses and cattle, you should not be at a rodeo! Why? Well, for one, the level of horsemanship at the rodeo is appallingly low. I am not a high-falutin' horse whisperer; in fact, I consider myself a beginner with only four or five years' experience. However, I have high standards for what a horse should be doing, how they should be doing it and what tools the horseman or horsewoman should use to get things done. I may not know how to get there yet, but I do know what I want and don't want.

First off, almost every horse at the rodeo was ridden with a tie-down and a shank bit in their mouth. The “horsemen” riding these horses were all heavy handed, it hurt just watching things unravel. Nine out of ten horses were prancing and jigging with the owners pulling one way and the tie-down pulling the other way with the horse's mouth in the middle.

It is strange because at the beginning I thought it must be the steer-wrestlers or the ropers or the barrel-racers that had the bad and antsy horses who just couldn't sit still (despite the tools and the heavy hands). Then I realized that the “pickup horses” were the same (by pickup horses I mean the horses used to untie and catch/steer the cattle, the bulls and the broncs).

Even the team of ladies who came in to help kick off the rodeo by doing their show with the American flag at a canter and a gallop had bad horses. They all looked uncontrollable and some of them looked like they could come unhinged within a matter of seconds. With the exception of two horses, that whole team too had tie-downs and harsh shank bits.

By the end of it all, I just thought to myself how awful it all is. These people are on horseback almost every day and they represent America's history and love of horses and horsemanship. They are what the general public comes to watch, no, pays to watch, in some delusion that horsemanship experts are in front of them putting on a show.

How sad! But you know who I feel worst for? Those poor horses. Even with all the tie-downs, shanks and heavy hands (and sometimes whips as an added bonus), most of these animals are nice and still do what they are asked to do. If it were me out there with a saddle on my back, my chest tied to my head and a 3-inch shank pulling on my mouth, I am not sure I would be that nice.

Happy 4th of July!!

Wednesday, June 25, 2014

Is Scala the C++ of the Java world?

I have spent considerable time learning Scala. Read a few books, wrote some code. It is not a bad deal - you get the world of Java, all its libraries, all the code ever written in it - all wrapped in a nice object-oriented and functional approach.

However, the complexity of the language seems daunting at first. There are many exceptions, catches, constructs and assumptions in the language. No matter how many books you have read and how much code you have written, it always feels like you have missed something - maybe a chapter in some book, or some feature that could have made your code better or prettier or completely different. Many libraries outside what comes with the language are half-baked or in a state of flux, and they all seem to be built differently. There are multiple frameworks for doing simple things, each with some advantages and some drawbacks over the competition. Then there is SBT - where to begin with that one? ;-)

People keep saying more choice is better, but what if many of the choices are half-baked or in a state of constant, never-ending flux? I will quote an old German proverb: "Wer die Wahl hat, hat die Qual" - he who has a choice has the torment.

Then there is Play 2, Akka and a myriad of other outside frameworks almost as complex as (if not more complex than) the language itself - many of them trampling on each other by re-implementing some of the functionality of the others. Add to the mix ScalaZ - hey (!), I thought Scala was already supposed to be functional! ;)

At the end there is the question of just being able to sit down and write what works, without much boilerplate or ritual performed basically to satisfy the compiler or the build tool. Python allows it (and I do not particularly like Python), C is easy - heck, even writing a piece of code in x86 assembly with gas/ld is easier and faster to get going!

So, why stick with Scala?

This article briefly outlined the negatives. In the next article I will weigh in with the positives and discuss whether it is just better to stick to Java or go the extra step and invest in learning Scala. I will also try to answer the question of why I brought in the (admittedly nasty) comparison to C++.

Friday, May 30, 2014

Case study in Scala: Avoiding imperative programming (does it pay off?) -> Part 1, The Ugly

Just for fun, let's consider the imperative example of counting words in a string, as presented in Chapter 17 of Programming in Scala by Odersky et al. In it, we have:

scala> def countWords(text: String) = {
  val counts = mutable.Map.empty[String, Int]
  for (rawWord <- text.split("[ !.,]+")) {
    val word = rawWord.toLowerCase
    val oldCount =
      if (counts.contains(word)) counts(word)
      else 0
    counts += (word -> (oldCount + 1))
  }
  counts
}
countWords: (String)scala.collection.mutable.Map[String,Int]

scala> countWords("See Spot run! Run, Spot. Run!")
res30: scala.collection.mutable.Map[String,Int] =
Map(see -> 1, run -> 3, spot -> 2)

Let's try and do it in a recursive/functional way, without mutable Maps:

scala> def countWords(s:String):Map[String,Int] = {
   def countW(str:List[String],
              acc:Map[String,Int]):Map[String,Int] = {
     if (str.isEmpty) acc
     else {
       val k = str.head.toLowerCase
       if (acc.contains(k)) {
         val wCtr = acc(k) + 1
         countW(str.tail, (acc - k) + (k -> wCtr))
       } else
         countW(str.tail, acc + (k -> 1))
     }
   }
   countW(s.split("[ !,.]+").toList, Map[String,Int]())
 }
countWords: (s: String)Map[String,Int]

scala> countWords("See Spot run! Run, Spot. Run!")
res21: Map[String,Int] = Map(see -> 1, spot -> 2, run -> 3)

Neither of these implementations is pretty. The recursive one essentially does the same thing as the iterative one, but in a "clunkier" way - we actually have to throw out the (String, count) pair and re-add it every time the same word is encountered. However, it does the job.

Let's see if we can improve on this in the next installment.
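As a teaser for that installment, one obvious direction is a fold over an immutable map - this is my own sketch, not code from the book, but it keeps the accumulator threading of the recursive version while dropping the explicit recursion and the remove-and-re-add dance:

```scala
def countWords(text: String): Map[String, Int] =
  text.split("[ !.,]+").map(_.toLowerCase)
    .foldLeft(Map.empty[String, Int]) { (acc, word) =>
      // bump the count, defaulting to 0 for unseen words
      acc + (word -> (acc.getOrElse(word, 0) + 1))
    }
```

For the sample sentence this produces the same Map(see -> 1, spot -> 2, run -> 3) as the two versions above.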

Thursday, May 22, 2014

Hacking on a FreeBSD port

I have been trying to check out the CURRENT branch of FreeBSD so that I can start playing with the kernel and ran into a tiny problem: for some reason the FreeBSD folks use subversion (yuck) to manage the code, and the subversion port in the ports tree segfaults on my machine. It does generate a core, so I recompiled the port with debug symbols, ran gdb on the program and the core file, and got the file and line where it is segfaulting. All I want to do is insert a printf() statement there.

Below is the sequence of commands to do it - I am sure there are better ways but hey...

(as root):

cd /usr/ports/
make clean distclean fetch
make extract
cp work/(path/filename) work/(path/filename).orig
vim work/(path/filename)   (make your change here)
make makepatch   (this will add your change to /usr/ports//files/ for "next time")
rm -rf work
make fetch
make extract
make build

The produced executable under /usr/ports//work/ should now have your change in it.

Wednesday, May 21, 2014

FreeBSD ports compile with DEBUG symbols (so you can gdb the core files later ;)

make WITH_DEBUG=yes STRIP= install

After this, if your port's executable generates a core file you can do
gdb /usr/local/bin/(port executable name) ~/(port executable name).core

Wednesday, March 26, 2014

Why is Facebook REALLY buying Oculus

On the heels of this news - the REAL reason Facebook is buying Oculus is that Facebook knows technology marches on and the VR world has the potential to kill Facebook. Think about it: Facebook is so 2D - flat posts, pictures, what I ate this morning (as if anyone cares). 3D and VR have the potential to change all that in ways unseen. Zuckerberg is either buying Oculus to jump on that gravy train or to prevent someone else from acquiring the company and making Facebook obsolete in a new kind of VR way. Either way, Facebook can only see upside in this acquisition, not downside. If Zuck decides to kill Oculus quietly, well, he killed the most advanced potential threat to Facebook's two-dimensional domination. If he doesn't kill it, imagine the scary world of 3D Virtual Reality Facebook EVERYWHERE.

Friday, March 07, 2014

Scalatra, Akka actors and default timeouts

This official Scalatra page claims that setting implicit val timeout = value seconds will set the time interval before the dreaded "Gateway Timeout" error to value seconds in your code. However, as of 2.2.2 this is NOT TRUE.

No matter what timeout is set to, Scalatra will kick out after 30 seconds - see why here.

In order to actually set the timeout to whatever value you need, add the code below to your servlet (the two imports bring in the Timeout/Duration types and the "120 seconds" syntax):

import akka.util.Timeout
import scala.concurrent.duration._

implicit val timeout:Timeout = 120 seconds
override implicit val asyncTimeout:Duration = 120 seconds

Thursday, February 27, 2014

Easy Spark development on Amazon

I have a VPC set up on Amazon for our data pipeline - from the get-go, it was a priority of mine to make everything as secure as possible. As part of this setup, I created an OpenVPN gateway to access the "inside" of the pipeline. This whole setup takes some time and is a bit intricate - I will document it in a different post.

The goal of this post is to share my "workflow" for developing and testing Spark apps. Inside the VPC is a set of 16 nodes which form a standalone Spark cluster. The same 16 nodes are also part of a Hadoop HDFS cluster where each node's ephemeral disk space (1.6TB per machine; Amazon gives you 4x400GB partitions) has been set up as a RAID0 array, and these RAID0 arrays are part of the HDFS pool. The difference in speed compared to EBS-attached volumes is very noticeable, but I digress - as I said, this will be a topic for a different article.

I work at home on my Macbook Air but my cluster is on Amazon. Since I use OpenVPN, I purchased a copy of Viscosity to be able to connect to the VPC. There is also Tunnelblick, which is free, but I have found it "flaky" and a bit unstable (personal opinion/experience - YMMV) compared to Viscosity, which has been solid and at $9/year the price cannot be beat.

So, workflow:

1/ Fire up Viscosity, establish VPN connection to VPC

2/ Mount a directory on the Spark cluster where I will be running my application code - I use sshfs:
sshfs -o sshfs_sync -o sync_readdir sparkuser@spark-master:/home/sparkuser/spark_code spark-master/

To do this you will need to set up ssh key-based authentication.

3/ Now that my remote folder is mounted locally in spark-master/, I use SublimeText to edit my files. Each save is immediately propagated to the remote machine. This may or may not be slow, depending on your setup/connectivity.

4/ ssh sparkuser@spark-master

5/ cd spark_code/whatever_directory_my_current_project_is_in

6/ Run sbt

7/ You can run sbt so that it is sensitive to files changing in the project - it will trigger an automatic recompile each time a file changes. The simplest way is to do
> ~ compile

In any case, hope this helps :)

Scalatra with actors commanding Spark - at last, I figured it out

In this post I echoed my frustration about being stuck in figuring out how to share a Spark context among akka actors in a Scalatra environment. Well, I was wrong to say it is difficult or impossible, it turned out to be pretty easy. Sigh. At least I figured it out myself ;)

In any case, I started with a basic giter8 project, as per the Scalatra website.

g8 scalatra/scalatra-sbt 

Answer all the questions and move on to the directory that was created. Mine is called "pipelineserver" and my package will be com.github.ognenpv.pipeline

giter8 will have created a few important files; let's go through them one by one.

First, the build file. Add or change the following in its contents, in the appropriate places, and leave the rest be:

  val ScalaVersion = "2.10.3"
  val ScalatraVersion = "2.2.2"

        "org.apache.spark" % "spark-core_2.10" % "0.9.0-incubating",
        "org.apache.hadoop" % "hadoop-client" % "2.2.0"

I use hadoop 2.2.0 since my Spark standalone cluster was compiled against that version of Hadoop libraries and I run an HDFS filesystem based on that, on the same cluster.


package com.github.ognenpv.pipeline

import{ActorRef, Actor, ActorSystem}
import akka.util.Timeout
import java.util.concurrent.TimeUnit
import org.scalatra._
import scalate.ScalateSupport
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.slf4j.{Logger, LoggerFactory}
import scala.concurrent.ExecutionContext
import org.scalatra.{Accepted, AsyncResult, FutureSupport, ScalatraServlet}

class PipelineServlet(system:ActorSystem, myActor:ActorRef) extends PipelineserverStack with FutureSupport {

  import _root_.akka.pattern.ask
  implicit val timeout = Timeout(36000)
  protected implicit def executor: ExecutionContext = system.dispatcher

  get("/") {
    myActor ? "first20"
  }

  get("/count") {
    myActor ? "count"
  }

  get("/test") {
    myActor ? "test"
  }
}

class MyActor(sc:SparkContext) extends Actor {
  // change the following two lines to do whatever you want to do
  // with whatever filesystem setup and format you have
  val f = sc.textFile("hdfs://")
  val events = f.filter(_.split(",")(0).split(":")(1).replace("\"","") == "Sign Up").map(line => (line.split(",")(2).split(":")(1).replace("\"",""),0)).cache

  def receive = {
    case "count" => sender ! events.count.toString
    case "first20" => sender ! events.take(20).toList.toString
    case "test" => sender ! "test back!"
    case _ => sender ! "no habla!"
  }
}
The system is very simple - we create an actor, one that will receive the SparkContext, read in a basic file from HDFS, do some basic parsing via some Spark actions and cache the result of the parsing. The result is a set of tuples, each looking like (id, tag), where tag is a 0 or a 1.

This actor will be responsible for doing things to the cached result, when asked by the Scalatra servlet, as a result of the route being served. It will execute the spark action and return the result to the Scalatra servlet. We are not doing any error checking, for simplicity. We are also not trying to be smart about timeouts and synchronization, for simplicity (herein somewhere lie Futures? ;)
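On that Futures hint: if I read the Scalatra docs correctly, FutureSupport also lets you wrap the ask in an AsyncResult and attach a recovery, so a dead or slow actor degrades into an error response instead of an unhandled exception. A sketch only - the route and the error text here are made up for illustration, not taken from the project above:

```scala
get("/count") {
  new AsyncResult {
    val is =
      (myActor ? "count").recover {
        case t: Throwable => InternalServerError("Spark action failed: " + t.getMessage)
      }
  }
}
```

The recover call needs the implicit ExecutionContext that the servlet already provides via executor.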

Finally, this is the contents of the Scalatra bootstrap class:


import{ActorSystem, Props}
import com.github.ognenpv.pipeline._
import org.scalatra._
import javax.servlet.ServletContext
import org.apache.spark._
import org.apache.spark.SparkContext._

class ScalatraBootstrap extends LifeCycle {

  val sc = new SparkContext("spark://","PipelineServer","/home/sparkuser/spark")
  // adjust this to your own .jar name and path
  val env = SparkEnv.get
  val system = env.actorSystem
  val myActor = system.actorOf(Props(new MyActor(sc)))

  override def init(context: ServletContext) {
    context.mount(new PipelineServlet(system, myActor), "/actors/*")
  }

  override def destroy(context:ServletContext) {
    sc.stop()
  }
}
In this class we create the actual SparkContext. Beware that when you run sbt, you should execute the packageBin task first to create the jar that you will feed to your SparkContext. If you do not do this, Spark actions like count (which are "jobs" executed by passing closures around Spark's actor system, it would seem - this is my uneducated guess) will fail because Spark will not be able to find the class necessary to pass around the closure being executed.

Notice also that we are using Spark's ActorSystem, obtained via the SparkEnv which is given to every thread (SparkEnv.get). We then use this ActorSystem to create our actor responsible for executing SparkContext actions. Also notice that in destroy() we stop the current SparkContext so that we can exit the Spark portion of the servlet cleanly.

Change the necessary variables when creating the SparkContext and you should be good to run the code on your Spark cluster (I am running a standalone cluster in a VPC on Amazon).

> sbt
> packageBin
> container:start

Enjoy :)

Wednesday, February 26, 2014

Spark's "weird" context sharing rules - how I killed a week trying!

"I spent about a week on this."

That's my opening statement intended to fully depict my level of frustration. When you are facing deadlines (as many of us do in the real world), it is not easy to spend a week or two, just like that, with not much in terms of results. But, I digress, politely.

So, what's the problem? Well, imagine you spent some time and wrote a bunch of "queries" to crunch your "big data" in Spark. It holds a great promise - it is a much faster reimplementation of Hadoop, in memory. It can read, no, devour files in HDFS in parallel. It can do streaming, almost-in-real-time queries. It can do 0MQ, it can do parallel access to files on Amazon's S3, it can talk Cassandra, Mongo. It is written in Scala, it can do Akka actors talking to each other, it features Resilient Distributed Datasets that are reproducible (hence the resilient part).

Only problem is, the thing is undocumented. Well, almost documented. Well, that depends on your point of view and how much time you have to hack on it. The documentation is not bad, it is just lacking in the very fine points, the ones you inevitably hit when you start "abusing" the platform ;)

OK, back to my "situation". Let's say you spent some time, you learned Scala, you learned something about what/how Map/Reduce is, now you wrote a bunch of Scala programs to execute Spark queries. Each query is a jar file, that's the nature of the beast, you run these in batches, via a cron job or by hand when needed. Nothing wrong with that. Except....

Except, one day, your head data analyst comes to you and says that it would be nice to have all this stuff exposed via the web in a slick, "exploratory" kind of way. You say, sure, great! I get to learn Scalatra, Bootstrap and more!

You are all gung-ho, walking on clouds. Except....

Except that it turns out you cannot allocate multiple Spark Contexts (the main way to drive Spark calculations and actions) from the same app. That's a bit of a problem (and that's a bit of an understatement ;) since a web server by default serves requests in parallel from multiple threads. You cannot pool SparkContext objects, cannot optimize - heck, all your dreams have crashed and burned and you risk looking like an idiot in front of your whole team!

(Digression: Scalatra is absolutely awesome, by the way! It is a very clean, nice way to write RESTful APIs, you can serve HTML, Javascript, deluxe. But, it turns out there is a kink here too - I ran into major problems making a Scalatra servlet work with Spark - for some reason allocating a SparkContext from a Scalatra servlet was a no-go).

This drove me to Unfiltered (after I spent 3-4 days and $40 on a Scalatra book and fell in love with the framework, damn it!). Unfiltered is simple, functional, beautiful in a Haskellish kind of way, does not depend on any version of Akka so you can use whichever one matches the version Spark comes with.

OK. Back to sharing SparkContext instances. It turns out the friendly experts at Ooyala thought about this problem a few months ago and wrote a patch to the latest version of Spark (0.9). This jobserver github branch is meant to expose a RESTful API to submit jars to your Spark cluster. Sounds great (!) and it is, it actually works. Except....

Except that it comes with Hadoop 1.0.4 by default. My luck has it that my Hadoop HDFS cluster runs 2.2.0. You can compile Ooyala's jobserver branch of Spark-0.9-incubating with support for Hadoop 2.2.0 the standard way. It assembles. Except that it does not work. You cannot start the standalone cluster; it complains with the following exception:

14/02/25 20:15:24 ERROR ActorSystemImpl: RemoteClientError@akka://sparkMaster@ Error[java.lang.UnsupportedOperationException:This is supposed to be overridden by subclasses.
at akka.remote.RemoteProtocol$AddressProtocol.getSerializedSize(
[snipped for brevity]

So, what to do? A week or more later into this nightmare, I ran into a few online discussions claiming that since 0.8.1 (Dec 2013) Spark supports SparkContexts that are "thread safe". To be more precise, this is exactly the promise:

"Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if they were submitted from separate threads. By “job”, in this section, we mean a Spark action (e.g. save, collect) and any tasks that need to run to evaluate that action. Spark’s scheduler is fully thread-safe and supports this use case to enable applications that serve multiple requests (e.g. queries for multiple users)."

In theory, it should work. Stay tuned for the conclusions from the practical experience.
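To make the promise concrete, here is what that usage pattern would look like - a minimal sketch, assuming Spark 0.9 on the classpath; the local master, the app name and the computations are placeholders of mine, and this is exactly the kind of thing the "stay tuned" above is about:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // implicits for RDD operations
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object SharedContextDemo {
  def main(args: Array[String]) {
    // ONE SparkContext for the whole application
    val sc  = new SparkContext("local[4]", "SharedContextDemo")
    val rdd = sc.parallelize(1 to 1000000).cache

    // two independent Spark actions ("jobs") submitted from separate threads;
    // per the quoted docs, the scheduler should run these simultaneously
    val sum   = Future { rdd.map(_.toLong).reduce(_ + _) }
    val evens = Future { rdd.filter(_ % 2 == 0).count }

    println("sum = "   + Await.result(sum, 5.minutes))
    println("evens = " + Await.result(evens, 5.minutes))
    sc.stop()
  }
}
```

In a web server, each request-handling thread would play the role of one of these Futures against the single shared context.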

Scala actors and Spark Contexts

One nice thing about Scala is the akka actor system - an idea borrowed from Erlang, where it has found great success with its "Let it crash" philosophy.

One of the goals of my work at QuizUp is to create a flexible data analytics pipeline. A nice add-on is exposing this pipeline through a slick web front-end. For this I have chosen Unfiltered, in combination with actors and futures to "command" the Spark pipeline backend.

While I will not go into the details of the whole deal in this post, I will post a basic actor example that runs a simple Spark action on a Hadoop-based file. It's funny how one simple sentence can translate into heaps of technology and mounds of work ;)

Anyway, here it is.

import{Actor, Props, ActorSystem, ActorRef}
import akka.event.slf4j.Slf4jLogger
//import scala.concurrent.Future
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkEnv 

// this will be our message
case class HdfsFile(filename:String,ctx:SparkContext)

//case class setenviron(se:SparkEnv)

class HelloActor extends Actor {
    def receive = {
        //case setenviron(se) => SparkEnv.set(se)
        case HdfsFile(fn,ctx) => {
            val f = ctx.textFile(fn)
            // send the number of lines in the file back
            sender ! f.count
        }
        case "buenos dias" => println("HelloActor: Buenas Noches!")
        case _ => println("HelloActor: I don't know what you want!")
    }
}

class SenderActor(to:ActorRef) extends Actor {
    def receive = {
        case i:Long => println(s"Sender: Number of lines: ${i}")
        // bounce the message
        case hf:HdfsFile => to ! hf
        case _ => println("Sender: Nevermind!")
    }
}

object Main {
    def main(args: Array[String]) {
        // use the fair scheduler just for fun - no purpose in this example
        System.setProperty("spark.scheduler.mode", "FAIR")
        // change the settings in the following line to match your configuration
        // mine is a standalone cluster within a VPC on Amazon
        // you can also substitute the spark:// URL with just local[n] where n>1 to run locally
        val conf = new SparkConf().setMaster("spark://ip_address:7077").setAppName("Hello").setSparkHome("/home/sparkuser/spark")
        //val conf = new SparkConf().setMaster("local[2]").setAppName("Hello").setSparkHome("/Users/maketo/plainvanilla/spark-0.9")

        val sc = new SparkContext(conf)
        // create and start the actors
        val env = SparkEnv.get
        // use Spark's own actor system
        val system = env.actorSystem
        // create the actor that will execute the Spark context action
        val helloActor1 = system.actorOf(Props[HelloActor], name = "helloactor1")
        // pass it to the second actor that just acts as a bouncer/receiver
        val senderActor1 = system.actorOf(Props(new SenderActor(helloActor1)), name = "senderactor1")

        senderActor1 ! new HdfsFile("hdfs://path:port/2013-12-01.json", sc)
        helloActor1 ! "buenos dias"

        // for operations that take time, we have to wait
        // hence, we will wait forever with the next statement
        // there are better ways to deal with concurrency and timeouts
        // Futures are one of those ways
        system.awaitTermination()
    }
}

Wednesday, February 19, 2014

An (ever-growing) list of things that annoy me about the Mac

As a perk (or necessity) of a new job, I had my employer buy me a new Macbook Air and an iPhone 5S. Here is a list of annoying things about the Apple "approach" so far.

1/ Waking up from sleep is a b*tch! If your Macbook (running Mavericks) falls asleep or you put it to sleep, waking it up on your wireless network may take a while. Sure, it shows it is connected but you can't get anywhere.

2/ Related to 1/ above - you can take a wireless hot-spot down but it may or may not still show up in the list of available wireless hotspots for a while, even though your Mac claims it has refreshed the list.

3/ Macbook Air claims it has AirDrop, iPhone 5S claims it has AirDrop but they cannot communicate with each other. Only iPhone to iPhone or Mac to Mac is allowed, even though they both run the same app. WTF?

4/ The Macbook supports Bluetooth and the iPhone supports Bluetooth, but you cannot pair the laptop to the phone. Microsoft much, Apple?

5/ The above two points mean that in order to move a few photos from your phone to your laptop (which may be sitting right next to it), you have the following choices: a) involve a cord (yuck! so 20th century), b) use iTunes over WiFi (why would I want to involve a massive app to move two photos?) or c) use the Cloud (a round trip to the Internet and back to move a photo between a phone and a laptop that are literally two inches apart?). I thought Apple was all about simplicity! Even on my old Android phone and Linux laptop I could do a Bluetooth transfer and solve the problem easily.

6/ If you have a Macbook attached to an Apple display via Thunderbolt, there is no predicting what will happen on hook-up and un-hooking of the two devices. One time my laptop just would not wake up anymore.

7/ In a situation where your Mac is attached to an Apple display via Thunderbolt and the laptop's lid is down (hence you are watching everything on the big display), you can plug your headset into the laptop (since there are no jacks on the monitor) and it will happily ignore the headset. Either provide me with a jack on the monitor or do not ignore the one on the laptop.

I am sure there are more, but I have only been playing with this for a few weeks now. Don't get me wrong, I like what Apple brings to the table, for the most part. What I do not like is the air of superiority that many Apple fanboys bring to the table while happily forking over thousands of dollars for an expensive piece of machinery that has its silly problems. In fact, one of my Apple fanboy friends told me that I am "too old school" to be using a Mac. Oh well.

Monday, February 10, 2014

The law of large numbers (but not what you think)

There is an interesting thing happening these days. I suspect it has been happening for a while now, actually.

With the consolidation of the markets within only a few players, quality of services has slowly eroded.

It used to be that as a customer I was king and that customer service meant something. The company had to provide a solidly engineered product and satisfy certain QA standards. Not anymore!

The products are increasingly complex and layered, with each layer come certain benefits for many but problems for some. When these "some" have a problem, well, they are not so important to the company because they do not represent the majority of the customers. In addition, due to so many layers, it is becoming impossible for a single group of company employees to be in the position to help the clients (so they don't).

This has been evident in many fields. Consider telecommunications. It used to be that we all had land lines. They were fixed (yes, you could have a wireless handset, I know) but they were reliant on REAL wires and unless the wires were physically down, you had reliable service. Losing or "dropping" a call was unheard of.

Enter cell phones. They are convenient, they were advertised as better and more versatile. And they are, for many. The telecoms realized that cell phones are a much bigger market to be mined and decided to kill off the land line businesses themselves. Why not? You can only have one or two landlines per household but people are known to have multiple cell phones AND these cell phones get upgraded all the time. You can get locked into contracts, charged overage fees etc. This field has unlimited earning potential!

However, consider how many calls "suck" in quality, how many times they get dropped and consider that there are quite a few people who still have poor reception in their own homes. For these people a good, old fashioned landline would have been the solution (and no, not all of us get high speed internet where we live so no, we can't talk over the internet). Cell phone networks are vulnerable, less stable and exist in "thin air", especially compared to their landline cousins. But, nobody cares, right?

Let's consider the cloud, the email and other "personality" driven services where we have a few monopolies like Google or Facebook or Apple. Google's services experience problems non-stop. Sometimes your connection to their mail front-end experiences delays. Sometimes you get an email telling you that they discovered that some of your emails got incorrectly classified as SPAM, other times you get logged out of your web browser window because you have another Google account open in the same browser (with no rhyme or reason why this is happening). So on and so on.

Or take Facebook - sometimes you click on a photo and it just takes forever to open, and eventually does not open at all. It takes five or six clicks on the photo to maybe get it to display.

How about Amazon? You can request to watch a movie online and sometimes it will just spin and spin and spin and after an excruciating delay it will tell you that it cannot load the video (clearly a problem on their side). Sometimes you cannot get the video to load for a while and then all of a sudden you can.

Most of these services are reliable most of the time. However, none of them are reliable ALL the time. They "work", kind of, for, well, most people. But when they don't work, it is a) difficult to figure out why and b) you are just a drop in the ocean and Google honestly doesn't care. Even if you pay them $25/month per user on a Business Google App account.

Take Apple. You can spend $2000 on a Macbook and an extra $700 on an iPhone 5S and still have no simple way to move a photo you took on the phone to the new Mac laptop without involving a cord, the Internet, or a large app like iTunes. As someone told me recently, I may just be too "old school" and may not understand what Apple is trying to do, but: a) both their devices offer Bluetooth (yet cannot be paired for some reason), and b) both offer AirDrop but it only works iPhone to iPhone or laptop to laptop. However, most Apple people just use a) a cord, b) iTunes over wifi or c) the cloud to upload the photo and then an additional round trip to download it to the Mac. And I am the dumb one? Anyways, people like me, who think that in the 21st century you should just be able to share stuff between two devices without cords, extra roundtrips or specialized app intermediaries, are the minority.

What has happened is that all of these companies have decided to charge you money for a BEST EFFORT approach to providing a service. You pay a subscription or a monthly fee, but so long as things work for the majority of the people the majority of the time, things are good. The rest is "washed out", the engineers shrug their shoulders at these things as untraceable anomalies, the sales people get paid anyways and the high-level sharks still get their bonuses.

So, to recap: extra complexity, too many layers, nobody cares - the small fish are a wash, they are still making their money (even more) and YOU are the donkey.

The way of doing things (or should I say the Apple way of doing things)?

Recently I procured an Apple Macbook Air through work, in addition to a nice Apple Display and an iPhone 5S. Nice and EXPENSIVE pieces of equipment!

In any case, both the Mac and the iPhone advertise Bluetooth. However, you cannot pair the two devices. Both advertise AirDrop, but it turns out that you cannot use AirDrop to "drop" files between an iOS device and a Mavericks device; it only works iOS to iOS or Mavericks to Mavericks.

I have an Apple fanboi colleague at work who has drunk the Apple Kool-Aid; he has worked for Apple or supported Apple devices for the last 10 years and owns a lot of Apple stock. His response was that for me Apple sucks because I am used to doing things the "old school" way. Apparently I am too old to use the new stuff.

His final suggestion, however, was iTunes, which will sync via wi-fi. I told him that I do not want to use iTunes; I don't like the app. Then he suggested the Dropbox route. I said, "look, I just want to take ONE PHOTO and move it from the iPhone to the Mac and I don't want to involve the Cloud". I am still waiting for a reply...

Addendum: the reply arrived. I have been told to return the phone, since for me (I am so special, right?) Apple tech will not work (like it does for the hundreds of millions of people). So, we are back to blaming the user, no?