Posts Tagged With: Improvement

Avoid Priming Interview Candidates

To avoid having interview candidates start every interview with the same "tell me your background" boilerplate, we get every interviewer in the room with the candidate for a few minutes at the beginning of the onsite, so everyone can introduce themselves and the candidate can walk through their background once.

Recently we had a minority candidate with a unique name come for an onsite interview. One person in the loop, who also has an untraditional name, asked "what's the worst butchering of your name you've ever heard?" Seems like an innocuous enough question, and it's possible the candidate didn't think twice about it.

But it stuck with me because of a paper I saw recently: we may have unknowingly activated a social identity. Cue the research:

Recent studies have documented that performance in a domain is hindered when individuals feel that a sociocultural group to which they belong is negatively stereotyped in that domain. We report that implicit activation of a social identity can facilitate as well as impede performance on a quantitative task. When a particular social identity was made salient at an implicit level, performance was altered in the direction predicted by the stereotype associated with the identity. Common cultural stereotypes hold that Asians have superior quantitative skills compared with other ethnic groups and that women have inferior quantitative skills compared with men. We found that Asian-American women performed better on a mathematics test when their ethnic identity was activated, but worse when their gender identity was activated, compared with a control group who had neither identity activated. Cross-cultural investigation indicated that it was the stereotype, and not the identity per se, that influenced performance.

The full paper is available here.

In the study, the researchers gave female Asian-American undergrads a math test. However, some were primed to think about the fact that they were women (via questions about whether they lived in a single-sex or coed dorm, and why they would prefer a coed or single-sex floor), and some were primed to think about the fact that they were Asian (via questions about whether their parents and grandparents spoke English, and how many generations their family had lived in America). The female-primed group answered 43% of questions correctly, the neutral group (no priming questions) answered 49% correctly, and the Asian-primed group answered 53% correctly. That's a pretty large difference! The original study was replicated in 2014 and the same effects were found (among participants who were aware of the stereotypes).

So back to our interview! It's possible that by asking a well-intentioned icebreaker question about the candidate's name, we were actually priming their racial identity, and leading them to do worse in an interview. We accidentally emphasized the difference between the candidate and ourselves (white people with names like "Kevin"). We talked about it afterwards, and shared around a link to the paper above, in the hopes that we are giving every candidate the same playing field.

This stuff is hard to get right: you have to think about it, and you have to be aware of the research. We're not perfect, but we know these effects exist and we're trying hard to fight them.

Liked what you read? I am available for hire.

Stepping Up Your Pull Request Game

Okay! You had an idea for how to improve the project, the maintainers indicated they'd approve it, you checked out a new branch, made some changes, and you are ready to submit it for review. Here are some tips for submitting a changeset that's more likely to pass through code review quickly, and make it easier for people to understand your code.

A Very Brief Caveat

If you are new to programming, don't worry about getting these details right! There are a lot of other useful things to learn first, like the details of a programming language, how to test your code, how to retrieve data you need, then parse it and transform it into a useful format. I started programming by copy/pasting text into the WordPress "edit file" UI.

Write a Good Commit Message

A big, big part of your job as an engineer is communicating what you're doing to other people. When you communicate, you can be more sure that you're building the right thing, you increase usage of things that you've built, and you ensure that people don't assign credit to someone else when you build things. Part of this job is writing a clear message for the rest of the team when you change something.

Fortunately you are probably already doing this! If you write a description of your change in the pull request summary field, you're already halfway there. You just need to put that great message in the commit instead of in the pull request.

The first thing you need to do is stop typing git commit -m "Updated the widgets" and start typing just git commit. Git will try to open a text editor; you'll want to configure this to use the editor of your choice.
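If you haven't configured an editor yet, here is one way to do it, using vim purely as an example (substitute whatever editor you prefer):

```shell
# Tell git which editor to open for commit messages.
git config --global core.editor vim

# Alternatively, git respects the GIT_EDITOR (or EDITOR) environment
# variable, which you can export from your shell profile.
export GIT_EDITOR=vim
```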

A lot of words have been written about writing good commit messages; Tim Pope wrote my favorite post about it.

How big should a commit be? Bigger than you think; I rarely use more than one commit in a pull request, though I try to limit pull requests to 400 lines removed or added, and sometimes break a multiple-commit change into multiple pull requests.

Logistics

If you use Vim to write commit messages, the editor will show you if the summary goes beyond 50 characters.
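As a made-up example, a commit message in that shape - a summary under 50 characters, a blank line, then a body explaining the why - might look like:

```text
Cache rendered widget HTML for five minutes

Rendering a widget hit the database on every request, which showed up
in our slowest traces. Widgets change rarely, so cache the rendered
HTML and invalidate it on save.
```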

If you write a great commit message, and the pull request is one commit, Github will display it straight in the UI! Here's an example commit in the terminal:

And here's what that looks like when I open that commit as a pull request in Github - note Github has autofilled the subject/body of the message.

Note if your summary is too long, Github will truncate it:

Review Your Pull Request Before Submitting It

Before you hit "Submit", be sure to look over your diff so you don't submit an obvious error. In this pass you should be looking for things like typos, syntax errors, and debugging print statements which are left in the diff. One last read through can be really useful.
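One low-tech way to do this final pass locally, assuming your branch was cut from master:

```shell
# Show exactly the changes the pull request will contain: everything on
# your branch since it diverged from master (the three-dot form diffs
# against the merge base, not current master).
git diff master...HEAD
```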

Everyone struggles to get code reviews done, and it can be frustrating for reviewers to find things like print statements in a diff. They might be more hesitant to review your code in the future if it has obvious errors in it.

Make Sure The Tests Pass

If the project has tests, try to verify they pass before you submit your change. It's annoying that Github doesn't make the test pass/failure state more obvious before you submit.

Hopefully the project has a test convention - a make test command in a Makefile, instructions in a CONTRIBUTING.md, or automatic builds via Travis CI. If you can't get the tests set up, or it seems like there's an unrelated failure, add a separate issue explaining why you couldn't get the tests running.

If the project has clear guidelines on how to run the tests, and they fail on your change, it can be a sign you weren't paying close enough attention.

The Code Review Process

I don't have great advice here, besides a) patience is a virtue, and b) the faster you are to respond to feedback, the easier it will go for reviewers. Really you should just read Glen Sanford's excellent post on code review.

Ready to Merge

Okay! Someone gave you a LGTM and it's time to merge. As a part of the code review process, you may have ended up with a bunch of commits that look like this:

These commits are detritus, the prototypes of the creative process, and they shouldn't be part of the permanent record. Why fix them? Because six months from now, when you're staring at a piece of confusing code and trying to figure out why it's written the way it is, you really want to see a commit message explaining the change that looks like this, and not one that says "fix tests".

There are two ways to get rid of these:

Git Amend

If you just made a commit and have a new change that should be part of the same commit, use git commit --amend to fold the staged change into the previous commit. If you don't need to change the message, use git commit --amend --no-edit (I map this to git alter in my git config).
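That alias can be set up like this (one plausible spelling of it):

```shell
# "git alter" amends staged changes into the previous commit,
# keeping the existing commit message untouched.
git config --global alias.alter "commit --amend --no-edit"
```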

Git Rebase

You want to squash all of those typo fix commits into one. Steve Klabnik has a good guide for how to do this. I use this script, saved as rb:

    #!/usr/bin/env bash
    branch="$1"
    if [[ -z "$branch" ]]; then
        branch='master'
    fi
    BRANCHREF="$(git symbolic-ref HEAD 2>/dev/null)"
    BRANCHNAME=${BRANCHREF##refs/heads/}
    if [[ "$BRANCHNAME" == "$branch" ]]; then
        echo "Switch to a branch first"
        exit 1
    fi
    git checkout "$branch"
    git pull origin "$branch"
    git checkout "$BRANCHNAME"
    if [[ -n "$2" ]]; then
        git rebase "$branch" "$2"
    else
        git rebase "$branch"
    fi
    git push origin --force "$BRANCHNAME"

If you run that with rb master, it will pull down the latest master from origin, rebase your branch against it, and force push to your branch on origin. Run rb master -i and select "squash" to squash your commits down to one.
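When the interactive rebase opens the todo list, you keep pick on the first commit and change the rest to squash; an invented example:

```text
pick 1a2b3c4 Add webhook retry logic
squash 5d6e7f8 fix tests
squash 9a0b1c2 typo
```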

As a side effect of rebasing, you'll resolve any merge conflicts that have developed between the time you opened the pull request and the merge time! This can take the headache away from the person doing the merge, and help prevent mistakes.


Speeding up test runs by 81% and 13 minutes

Yesterday I sped up our unit/integration test runs from 16 minutes to 3 minutes. I thought I'd share the techniques I used during this process.

  • We had a hunch that an un-mocked network call was taking 3 seconds to time out. I patched this call throughout the test code base. It turns out this did not have a significant effect on the runtime of our tests, but it's good to mock out network calls anyway, even if they fail fast.

  • I ran a profiler on the code. Well that's not true, I just timed various parts of the code to see how long they took, using some code like this:

    import datetime
    start = datetime.datetime.now()
    some_expensive_call()
    total = (datetime.datetime.now() - start).total_seconds()
    print "some_expensive_call took {} seconds".format(total)
    

    It took about ten minutes to zero in on the fixture loader, which was doing something like this:

    def load_fixture(fixture):
        model = find_fixture_in_db(fixture['id'])
        if not model:
            create_model(**fixture)
        else:
            update_model(model, fixture)
    

    The call to find_fixture_in_db was doing a "full table scan" of our SQLite database, and taking about half of the run-time of the integration tests. Moreover in our case it was completely unnecessary, as we were deleting and re-inserting everything with every test run.

    I added a flag to the fixture loader to skip the database lookup if we were doing all inserts. This sped up observed test time by about 35%.

  • I noticed that the local test runner and the Jenkins build runner were running different numbers of tests. This was really confusing. I ended up doing some fancy stuff with the xunit xml output to figure out which extra tests were running locally. Turns out, the same test was running multiple times. The culprit was a stray line in our Makefile:

    nosetests tests/unit tests/unit/* ...
    

    The tests/unit/* change was running all of the tests in compiled .pyc files as well! I felt dumb because I actually added that tests/unit/* change about a month ago, thinking that nosetests wasn't actually running some of the tests in subfolders of our repository. This change cut down on the number of tests run by a factor of 2, which significantly helped the test run time.
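The fixed Makefile target simply drops the glob (if tests in subfolders aren't being picked up, the usual culprit with nosetests is a missing __init__.py in the subfolder, not the runner):

```make
test:
	nosetests tests/unit
```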

  • The Jenkins package install process would remove and re-install the virtualenv before every test run, to ensure we got up-to-date dependencies with every run. Well that was kind of stupid, so instead we switched to running

    pip install --upgrade .
    

on our setup.py file, which should pull in the correct versions of dependencies when they change (most of them are specified either with double-equals, ==, or greater-than-or-equal, >=, signs). Needless to say, skipping the full reinstall on every run saved about three to four minutes.

  • I noticed that pip would still uninstall and reinstall packages that were already there. This happened for two reasons. One, our Jenkins box is running an older version of pip, which doesn't have this change from pip 1.1:

    Fixed issue #49 - pip install -U no longer reinstalls the same versions of packages. Thanks iguananaut for the pull request.

    I upgraded the pip and virtualenv versions inside of our virtualenv.

    Also, one dependency in our tests/requirements.txt would install the latest version of requests, which would then be overridden in setup.py by a very specific version of requests, every single time the tests ran. I fixed this by explicitly setting the requests version in the tests/requirements.txt file.
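The resulting pin in tests/requirements.txt looks something like this (the version number is invented):

```text
# Match the version setup.py wants, so pip stops reinstalling requests
# on every run.
requests==2.0.1
```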

That's it! There was nothing major wrong with our process; we just fixed the way we did a lot of small things throughout the build. I have a couple of other ideas to speed up the tests, including loading fewer fixtures per test and/or instantiating some objects (like Flask's test_client) globally instead of once per test. You might not have made the same mistakes we did, but you'll likely find some speedups if you examine your build process as well.


Eliminating more trivial inconveniences

I really enjoyed Sam Saffron's post about eliminating trivial inconveniences in his development process. It resonated with me because I tend to get really distracted by minor hiccups in my development process (a page reload taking more than two seconds, switching to a new tab, etc.). I took a look at my own process and found a few easy wins.

Automatically run the unit tests in the current file

Twilio's PHP test suite is really slow - we're sloppy about keeping unit tests from hitting the disk, which means the suite takes a while to run. I wrote a short Vim command that runs only the tests in the current file. This makes the test iteration loop much, much faster, and I can run the entire suite once the current file is passing. Vim's <leader> key is excellent and I recommend you become familiar with it.

nnoremap <leader>n :execute "!" . "/usr/local/bin/phpunit " . bufname('%') . ' \| grep -v Configuration \| egrep -v "^$" '<CR>

bufname('%') is the file name of the current Vim buffer, and the last two commands are just grepping away output I don't care about. The result is awesome:

Unit test result in vim

Auto reloading the current tab when you change CSS

Sam has a pretty excellent MessageBus option that listens for changes to CSS files, and auto-refreshes a tab when this happens. We don't have anything that good yet but I added a vim leader command to refresh the current page in the browser. By the time I switch from Vim to Chrome (or no time, if I'm viewing them side by side), the page is reloaded.

function! ReloadChrome()
    execute 'silent !osascript ' . 
                \'-e "tell application \"Google Chrome\" " ' .
                \'-e "repeat with i from 1 to (count every window)" ' .
                \'-e "tell active tab of window i" ' . 
                \'-e "reload" ' .
                \'-e "end tell" ' .
                \'-e "end repeat" ' .
                \'-e "end tell" >/dev/null'
endfunction

nnoremap <leader>o :call ReloadChrome()<CR>:pwd<cr>

Then I just hit <leader>o and Chrome reloads the current tab. This works even if you have the "Developer Tools" open as a separate window, and focused - it reloads the open tab in every window of Chrome.

Pushing the current git branch to origin

It turns out that the majority of my git pushes are just pushing the current git branch to origin. So instead of typing git push origin <branch-name> 100 times a day I added this to my .zshrc:

    push_branch() {
        branch="$(git rev-parse --symbolic-full-name --abbrev-ref HEAD)"
        git push "$1" "$branch"
    }
    autoload push_branch
    alias gpob='push_branch origin'

I use this for git pushes almost exclusively now.
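Git also has a built-in shorthand for the same thing:

```shell
# HEAD expands to the current branch name, so this pushes the current
# branch to a same-named branch on origin.
git push origin HEAD
```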

Auto reloading our API locally

The Twilio API is based on the open-source flask-restful project, running behind uWSGI. One problem we had was that changes to the application code required a full uWSGI restart, which made local development a pain. Until recently, it was pretty difficult to get new Python code running in uWSGI besides doing a manual reload - you had to implement a file watcher yourself, and then communicate to the running process. But last year uWSGI gained the py-auto-reload feature, where uWSGI will poll for changes in your application and automatically reload itself. Enable it in your uWSGI config with

py-auto-reload = 1   # 1 second between polls

Or at the command line with uwsgi --py-auto-reload=1.

Conclusion

These changes have all made me a little bit quicker, and helped me learn more about the tools I use on a day to day basis. Hope they're useful to you as well!


Disability benefits: the new unemployment checks

This American Life is an excellent podcast, but occasionally puts out episodes on subjects I don't care for - fiction, reminiscences about home life, etc. There is one heuristic you can use to filter This American Life episodes: listen to the ones that tell a single story for the whole hour.

Example whole-hour podcasts, that are great stories: the NUMMI plant in Fremont, a story about Amanda Williams and juvenile justice in Georgia, a story on the Social Contract and why it's so hard to fix the country's current budget problems.

The latest episode of This American Life tells one story for the entire hour, and is similarly excellent. Ostensibly it's about healthcare in the US, but the true story is about a class of US citizens who are no longer fit for the workplace, and the steps they're taking to cope.

Occupational change over time is completely normal, and in fact, a very good thing for everyone. At one point in time, 98% of US workers were farmers. Imagine if the government had implemented protective measures for jobs in farming that were at risk of disappearing, as farming tools got better and workers became more productive. It would have prolonged the use of inefficient farming techniques and delayed moves into more productive industries.

Historically, sectoral shifts in the US economy have been handled without too much disruption to society. Workers retire in less productive sectors, and new graduates enter in promising industries. Of course in individual instances a mill may shut down and leave people without a job but on the whole it's worked out okay.

Lately there's been lots of evidence that the economy is shifting much faster than the retirement/new-entry process can adjust to. The result is a giant swath of society that is unable to contribute in a meaningful way, or earn their keep. This American Life focuses on this group of people, currently numbering in the tens of millions (as well as the group of rent seekers catering to them). I'd suggest you tune in, because this problem is not going away.

I don't have solutions or criticism; the story is more sad than anything. You should be tuned into what is happening with the workforce in the US today, especially when most of us live in areas surrounded by people that share our socioeconomic background and status.

I'd encourage you to read Kevin Kelly's recent post on The Post-Productive Economy. It's one view of where we might be heading.


Helping Beginners Get HTML Right

If you've ever tried to teach someone HTML, you know how hard it is to get the syntax right. It's a perfect storm of awfulness.

  • Newbies have to learn all of the syntax, in addition to the names of HTML elements. They don't have the pattern matching skills (yet) to notice when their markup is not right, or the domain knowledge to know it's spelled "href" and not "herf".

  • The browser doesn't provide feedback when you make mistakes - it will render your mistakes in unexpected and creative ways. Miss a closing tag and watch your whole page suddenly acquire italics, or get pasted inside a textarea. Miss a quotation mark and half the content disappears. Add in layouts with CSS and the problem doubles in complexity.

  • Problems tend to compound. If you make a mistake in one place and don't fix it immediately, you can't determine whether future additions are correct.
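The first two failure modes look like this in practice (an invented snippet):

```html
<!-- The <em> below is never closed, so everything after it renders
     italic; the missing closing quote on the href swallows the text
     that follows it. -->
<p><em>Welcome to my page</p>
<a href="about.html>About me</a>
```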

This leads to a pretty miserable experience getting started - people should be focused on learning how to make an amazingly cool thing in their browser, but instead they get frustrated trying to figure out why the page doesn't look right.

Let's Make Things A Little Less Awful

What can we do to help? The existing tools to help people catch HTML mistakes aren't great. Syntax highlighting helps a little, but sometimes the errors look as pretty as the actual text. XML validators are okay, but tools like HTML Validator spew out red herrings as often as they do real answers. Plus, you have to do work - open the link, copy your HTML in, read the output - to use it.

We can do better. Most of the failures of the current tools are due to the complexity of HTML - which, if you are using all of the features, is Turing complete. But new users are rarely exercising the full complexity of HTML5 - they are trying to learn the principles. Furthermore the mistakes they are making follow a Pareto distribution - a few problems cause the majority of the mistakes.

Catching Mistakes Right Away

To help with these problems I've written a validator which checks for the most common error types and displays feedback immediately when the user refreshes the page, so they can instantly find and correct mistakes. It works in the browser, on the page you're working with, so you don't have to do any extra work to validate your file.

Best of all, you can drop it into your HTML file in one line:

<script type="text/javascript" src="https://raw.github.com/kevinburke/tecate/master/tecate.js"></script>

Then if there's a problem with your HTML, you'll start getting nice error messages, like this:

error message

Read more about it here, and use it in your next tutorial. I hope you like it, and I hope it helps you with debugging HTML!

It's not perfect - there are a lot of improvements to be made, both in the errors we can catch and on removing false positives. But I hope it's a start.

PS: Because the browser will edit the DOM tree to wipe the mistakes users make, I have to use raw regular expressions to check for errors. I have a feeling I will come to regret this. After all, when parsing HTML with regex, it's clear that the <center> cannot hold. I am accepting this tool will give wrong answers on some HTML documents; I am hoping that the scope of documents turned out by beginning HTML users is simple enough that the center can hold.


How to talk to recruiters at a career fair

Last week two other Twilio engineers and I went to the Columbia engineering career fair. We had a great time and talked to a lot of really smart people. However, I was surprised at some of the naive mistakes students made when we talked. We're there to hire students, and students are there to get internships and jobs, so we desperately want it to work out. As a student, here are some things to avoid when you are talking to a recruiter.

  • Are you hiring full time software engineers? I am looking for a full time position, here is my resume - This question demonstrates a high degree of naïveté. First, the market for programming talent is absolutely on fire right now and everyone is looking for talented people. It's also safe to assume we are at the career fair because we are trying to recruit people for full time positions.

    Second, especially as a small company, we are looking for people who are passionate about what we are doing. It's good to talk about cool things you've done, but at some level you have to express interest in what we are doing, or tie your skills back to how they'd fit in at our company.

    Maybe fifteen people asked me this "are you hiring" question during the fair and most of them were ESL students. I know these students are very bright, and I understand it can be nerve wracking for them to speak to recruiters, but it was very difficult for us to get a read on whether someone would be a good fit, based on how the conversations went. For us, that translates to a "no phone screen".

  • I love programming Java - This is a red flag for us. There is nothing wrong with Java per se, but it's a language that's hard to get excited about. It's also the language students have been using for class, which tells us that you might not do that much programming/learning outside of class (another red flag).

    The other big problem is Java probably has the worst signal to noise ratio of any language on students' resumes nowadays; a higher ratio of students with, say, Haskell experience are good candidates than students who mention Java. Two people who have covered this topic in much better detail than me are Paul Graham and Yaron Minsky.

    The one exception to this rule is if you have experience with advanced Java programs like Hadoop or Asterisk (we are a telecom company and use Asterisk extensively). In this case definitely tell us about your Java experience.

  • You're wearing a suit - At some career fairs almost everyone is wearing suits, so it's not a big deal if you are too. That said, I went out of my way to talk to people wearing jeans, because a) it indicated they weren't interested in jobs where a suit is expected, and b) it indicated they were confident enough in themselves and their skill sets to ignore the vast numbers of people wearing suits. So at least if you are looking for a job at a small tech company, don't be afraid to wear something more comfortable.

  • I don't really know what I want to do - Like the suits, you are young and it's fine if you don't know what you want to do yet. The problem is that we have three specific teams and while we are talking, I am trying to figure out which team you would be the best fit for. Sometimes this can be hard and good people will fall through because we don't know where to put you. It really helps if you express a strong preference that matches the skills on your resume, then I know what team to have you interview with and have a sense you'll be happy.

    If you are not sure, one good strategy is to rotate - tell one company you'd like to do frontend work, tell the next you'd like to work on mobile and tell the third company you'd like to work on big data. Next summer (or in your spare time) you can try out something different.

  • Here's my resume, looking forward to a phone screen - Even if you are an exceptional candidate, this will not get you a phone screen at most small companies. Here is the deal. The Google, Facebook and LinkedIn recruiters have little say in whether you get an interview or not. They are there to hand out pens, collect resumes and answer basic questions about internships/full time opportunities. You'll be encouraged to apply online.

    When you talk to smaller companies at career fairs, the engineers you are talking to are actively evaluating you and making decisions about whether to advance you through the screening process. The stakes are much higher and you should consider it like a culture fit interview. You should do your research, try out the company's product, read through blog posts to figure out what sort of stack they use, etc. Then when you are talking, be sure to bring up parts of your experience that match up. For example, green flags for us are when students mention experience at hackathons, experience using Twilio in the past, or experience with HTTP/API's/writing web applications. So don't expect you can just drop a resume and get an interview - at a small company that won't fly.

Hopefully these help somewhat. A few things to keep in mind: first, these are things we look for as a small Internet company - if you want to work at an Internet giant, or in a different industry, my advice would be much different. Second, I might have sounded critical above, but we all really want you to do well - we love talking to great people, and it makes interviewing and screening much more pleasant. Third, we're hiring! I hope you found this article helpful, and I hope you check us out - twilio.com/jobs.


How one of the all time great magicians thinks about experiences

I was fascinated by this article in Esquire about Teller, half of the magic duo Penn and Teller, because of its description of how Teller thinks about his craft. I especially liked this story:

When Teller was in high school, [a teacher] read a short story to those few students before him, including an enraptured Teller: "Enoch Soames," by Max Beerbohm, written in 1916.

In the story, Beerbohm relates the tragic tale of Soames, a dim, hopeless writer with delusions of future grandeur. In the 1890s, Beerbohm recounts, Soames made a deal with the devil: In exchange for his soul, Soames would be magically transported one hundred years into the future — to precisely 2:10 P.M. on June 3, 1997 — into the Round Reading Room at the British Museum. There, he could look at the shelves and through the catalogs and marvel at his inevitable success. When Soames makes his trip, however, he learns that time has almost erased him before the devil has had the chance. He is listed only as a fictional character in a short story by Max Beerbohm.

Thirty-four-and-a-half years after that snowy reading by his satanic-looking teacher... Teller flew to England ahead of June 3, 1997.

As it turned out, there were about a dozen people in the Round Reading Room that afternoon... And at ten past two, they gasped when they saw a man appear mysteriously out of the stacks, looking confused as he scanned empty catalogs and asked unhelpful librarians about his absence from the files. The man looked just like the Soames of Teller's teenage imagination, "a stooping, shambling person, rather tall, very pale, with longish and brownish hair," and he was dressed in precise costume, a soft black hat and a gray waterproof cape. The man did everything Enoch Soames did in Max Beerbohm's short story, floating around the pin-drop-quiet room before he once again disappeared into the shelves.

And all the while, Teller watched with a small smile on his face. He didn't tell anyone that he might have looked through hundreds of pages in casting books before he had found the perfect actor. He didn't tell anyone that he might have visited Angels & Bermans, where he had found just the right soft black hat and gone through countless gray waterproof capes. He didn't tell anyone that he might have had an inside friend who helped him stash the actor and his costume behind a hidden door in the stacks. Even when Teller later wrote about that magical afternoon for The Atlantic, he didn't confess his role. He never has.

"Taking credit for it that day would be a terrible thing — a terrible, terrible thing," Teller says. "That's answering the question that you must not answer."

Now, again, his voice leaves him. That afternoon took something close to actual sorcery, following years of anticipation and planning. But more than anything, it required a man whose love for magic is so deep he can turn deception into something beautiful.

Years of preparation for something only twelve people would see. Read Teller's account of the day in 1997, or the full profile of Teller in Esquire


Why I’m not switching my bank to Simple

I finally got my Simple invite. Simple is an online banking startup that wants to make the experience of using banks much, much better. As far as I can tell they are delivering on that experience - their site and iPhone app are a joy to use. They also focus on the whole experience - the ATM card that came in the mail was delivered in a beautiful package, and the signup process, which requires gathering a ton of private information about you, was painless.

That said, I realized shortly after signing up that I'm going to be sticking with Ally Bank for one simple reason: they refund ATM fees. I used to hate ATMs. My options were either a) look up the nearest in-network ATM online, then walk three blocks, all the while wondering why I was doing this to save a measly $3; or b) go to the liquor store ATM and hate myself for paying money to get at my own money.

ATM fee refunds turned an experience I hated into something I can brag about ("It doesn't matter, I get the money back at the end of the month"). I get about $120 in ATM refunds every year, and the time and stress saved by not hunting for an in-network ATM is easily worth two or three percentage points of interest. I don't know how it makes business sense for Ally to pay my ATM fees, but it's their killer feature.

I appreciate what Simple is doing and wish them the best. If you are still saddled with a card from a bank with physical branches, switching to Simple is one of the best things you could do. However, at the moment I'll take Ally's ATM fee refunds (a better ATM experience) over Simple's awesome website and iPhone app (a better web experience).


What will happen to house prices in the Bay Area after the Facebook IPO?

A lot of people seem to think that house prices in the Bay Area will rise significantly after the Facebook IPO. It's fun to speculate about, but those people seem to be assuming a lot. Here are some of those assumptions I'm not so sure about:

  1. Facebook will add a significant number of new millionaires to the Bay Area.

  2. Everyone that benefits from the Facebook IPO will want to invest that money in a house, instead of into the stock market, retirement, cars, etc.

  3. Everyone that wants to invest their option cash in a house will buy shortly after they liquidate their options.

  4. Most of Facebook's new millionaires will buy houses in the Bay Area.

  5. The supply of housing for multi-millionaires is inelastic.

  6. The people buying new houses will not be vacating their old ones.

I'm not so sure that those assumptions are good ones. A San Francisco Chronicle article from 2009 said that there were 136,000 millionaires in the Bay Area in 2009, a number that has surely risen since then. Facebook has 3500+ employees, and on the high side I would guess maybe 1500 are going to earn enough money from the IPO to change their lifestyle. This would add about 1% to the Bay Area's total, assuming all of Facebook's newly minted millionaires live in the Bay Area.

If anything, the prices of houses at the very high end ($10 million plus) will rise, but it's hard to feel very sorry for people who are priced out of that market. A housing shortage here may even help remove or loosen some of the Bay Area's many restrictive housing and zoning laws.
