Sunday, June 22, 2014

A beautiful game vs Portugal

Surely today's match was not the best effort of either team. There were missed passes, mistakes, and plenty of ups and downs.

But what a thriller! What entertainment!

What a shame the game wasn't 15 seconds shorter :)

The U.S. must count themselves extremely fortunate to have 4 points from these first two matches.

The hardest match of the three awaits, however, as Germany have 4 points as well. Since all 4 teams in the group are still alive, the final matches will be exciting and energetic.

And now we will see if Klinsmann's knowledge of his home country does indeed provide the U.S. with that extra bit of advantage to give us a hope, a chance.

I know where I'll be, bright and early on Thursday morning. :)

Saturday, June 21, 2014

One game at a time!

All the stupid U.S. media are already talking about the Germany match.

First we have to play Portugal!

One thing at a time.

I hope they play Wondo, just because he's a bit of a local talent.

And he was a Chico alum, though a few years before my son got there.

One way or another, I'm definitely looking forward to tomorrow's match; may they all play well.

Thursday, June 19, 2014

MLS and the vanishing spray

Many of my friends were surprised when the referees began pulling out the vanishing spray.

But it's been used in MLS for some time now, so we Yankee fans were not startled at all; we think it's a great tool.

Of course, we Yankee fans like technology in our sports. I guess that's just how we roll.

Anyway: World Cup: What Is That Foaming Spray Used by Refs?

The vanishing spray contains a mixture of butane, isobutane and propane gas; a foaming agent; water; and other chemicals. When it leaves the can, the gas depressurizes and expands, creating small, water-covered droplets on the field. The butane mixture later evaporates, leaving only water and surfactant residue behind.

You can see the patent, too.

A foaming composition for generating temporary indications, preferably to mark defensive wall lines and spots for free kick shootout in football, where the foam remains stable over a very short but sufficient period of time to take the shot, wherein the composition comprises a propellant, a foaming emulsifier, a cation chelating agent, a preservative and water.

Tuesday, June 17, 2014

I guess I wasn't the only one who saw it that way...

... a few others did, as well, including a few who dipped into the unusual back-story about Ochoa being the reason that the entire Mexican team swore off meat for the entire tournament:

Scoreless excitement

If you're one of those sports fans who thinks a 0 - 0 tie must be the most boring outcome possible, I encourage you to go track down the highlights of Brazil - Mexico 2014, and watch the ASTOUNDING performance of Mexican goalkeeper Guillermo Ochoa.

Time and time again Brazil thunder down the field at breakneck speed, the ball dashing and flitting about, and time and time again Ochoa is just where he needs to be, perfectly positioned, denying every Brazilian chance.

After 14 decisive matches, the last 24 hours have delivered two scoreless draws (Iran v Nigeria, and now Brazil v Mexico).

But the excitement level hasn't abated in the least.

Mexico Keeper Guillermo Ochoa Denies Brazil's Neymar with Wonder-Save

Monday, June 16, 2014

What a glorious start!

The injury impact is substantial: not just Altidore's leg, but Dempsey's broken nose, and the leg injuries to Besler and Bedoya.

And what is wrong with Michael Bradley? He just didn't look right, all game. Thank goodness Jermaine Jones played out of his head, and Kyle Beckerman was his usual unflappable self. Was it just a formation thing? Or is Bradley ill or injured?

But what a game! Exciting from start to finish, my heart in my throat, as wave after wave attacked and attacked and attacked.

The equalizer seemed inevitable, but then they didn't fade, and as Brooks rose to meet Zusi's perfect corner, my triumphant shout must have alarmed not just the neighbors but the entire neighborhood!

Six Days To Portugal.

I suspect Klinsmann will just send everybody to their rooms and make them sleep for 48 hours; we fans could use at least that much ourselves.

But there is no let-up for the fans, as (gasp!) Brazil-Mexico awaits tomorrow...

Sunday, June 15, 2014

The International Sun-Earth Explorer-3

I loved this story in the weekend paper: Calling Back a Zombie Ship From the Graveyard of Space.

For 17 years, it has been drifting on a lonely course through space. Launched during the disco era and shuttered by NASA in 1997, the spacecraft is now returning to the civilization that abandoned it.

It's a wonderful story of long-term thinking and innovative use of modern technology.

Just consider this amazing outcome from a plan set in place 28 years ago:

After the successful Giacobini-Zinner flyby, ISEE-3 still had ample fuel, so three rocket burns in 1986 set it on a course to zoom about 30 miles above the surface of the moon 28 years later, on Aug. 10, 2014. The gravitational pull of the lunar flyby would swing ISEE-3 into orbit around Earth.

The astonishing thing is, it worked! Although it may need a bit of refinement:

Mr. Wingo has now persuaded NASA to use the Deep Space Network to pinpoint ISEE-3’s trajectory, to calculate the rocket burn required to put it on a path to Earth orbit. Dr. Farquhar’s 1986 calculations were close, but not exact. Slight errors are magnified over time, and now the uncertainty is 20,000 miles, which means the spacecraft could be on course to splat into the moon.

So, in these days of frantic innovation, how do you manage to interoperate with a machine designed and launched more than 35 years ago?

Recent advances in what are called software-defined radios allowed the team to build a new transmitter and install it on the Arecibo telescope within a few weeks, much more quickly and cheaply than would have been possible a few years ago.

So cheaply, in fact, that large parts of the project were crowd-funded:

On RocketHub, a crowdfunding website, they asked for $125,000 to help pay the costs. They collected nearly $160,000, from 2,238 donors.

All in all, it's great news. You can follow the project on their Facebook page.

Early reflections on this year's cup

I hope you've been enjoying the matches, for they've been super, super, super.

A few thoughts, from my parochial American perspective:

  • Decisive play! The first 8 matches have all been decisive, not a draw in the bunch.
  • And high scoring! It seems like we are seeing much more attacking play and a much higher level of scoring than I expected.
  • Good officiating. People will always complain, but all the games I've seen have flowed well, with referees present to keep the matches fair and honest, but not interfering to the point that the referees become a topic worth discussing. Since complaining about the ref is the top thing that soccer fans do, I don't expect this to continue, but it's been a great start in this respect.
  • Good conditions. After all the stories about the weather, the state of the facilities, and so forth, the conditions of play have been superb. The fields are well prepared and have allowed an extremely high level of play.
  • No weak teams. The only game that seemed like a blowout was Spain-Netherlands; every other game has been close, much closer than the scores might indicate.

If play continues at this level of quality and excitement, this will surely turn out to be one of the great cups of all time.

Ole!

Tuesday, June 10, 2014

Ole Segunda Parte

48 hours to go! Oh my!

From the great folks at Zonal Marking, groups E through H:

  • France: surprisingly promising
    Particularly notable in the warm-up matches has been France’s midfield press. Pogba and Matuidi are both extremely powerful, energetic players comfortable pushing up and shutting down the opposition, which works well in combination with a centre-back pairing happy playing high up the pitch, and the best sweeper-keeper around.
  • Honduras: physical, but little more
    However, this shouldn’t hide the fact that Honduras have performed extremely well to reach their second consecutive World Cup. This is a poor country with a very small population, and yet they finished above Mexico in the CONCACAF final qualification group, winning 2-1 in Azteca, a genuinely fantastic result.
  • Ecuador: the most basic side?
    The inescapable truth about Ecuador is that they’re primarily at this tournament because their home qualifiers are played at altitude.
  • Switzerland: true dark horses
    They’ve always boasted good organisation, but have lacked quality in attacking positions to record victories. That might have changed. Switzerland have a superb generation of young talent, summed up by the fact their four forwards are aged 21, 22, 23 and 24, and their first-choice attacking midfielders 21 and 25. If Switzerland can keep their traditional defensive structure while successfully introducing attacking invention, they have all the qualities required to succeed.
  • Bosnia: more cautious than expected
    Their strength is still the final third – their best two players are their number ten and their number nine. But there’s little to suggest Bosnia will be any more adventurous than average, which seems a great shame.
  • Nigeria: midfield questions…
    Onazi could do with someone behind him, and it’s odd that Keshi seems so determined to field a midfield in this format – with two deep and one ahead, when the two don’t look comfortable together deep, and there’s no obvious candidate to play just ahead. It’s hard to see Nigeria dominating matches with these problems, although their usual approach is to sit deep anyway.
  • Iran: frustrating to watch, frustrating to play against
    Queiroz will drill Iran relentlessly on the training ground. “Carlos was obsessive about stopping [the opposition],” said Gary Neville in his autobiography, remembering Queiroz’s time as Manchester United’s assistant coach. “We’d never seen such attention to detail. He’d put sit-up mats on the training pitch to mark exactly where he wanted the players to be, to the nearest yard. We rehearsed time and time again, sometimes walking through the tactics slowly with the ball in our hands.”
  • Argentina: big strengths, big weaknesses
    Alejandro Sabella has a system, favoured personnel, and will stick to his beliefs. His starting XI in the group stage will be his eleven most-selected players throughout qualification, which sounds obvious, but it’s rare for international managers to remain so committed to players over such a long period.
  • USA: a diamond midfield
    The United States are expected to add to this variety by using a diamond midfield, which might be unique among the 32 teams. Jurgen Klinsmann has spent recent weeks telling the press that the formation doesn’t matter, but the switch to the diamond in April’s 2-2 friendly draw against Mexico was a significant move, and was designed to bring the best out of the USA’s outstanding player, Michael Bradley.
  • Portugal: the same as usual
    Cristiano Ronaldo is cutting inside from the left, and his performance in the play-off against Sweden, when he scored a sublime counter-attacking hattrick, shows how Portugal have rightly based their side entirely around the Ballon d’Or holder. Ronaldo’s international form over the past 18 months has been the best of his career, and a little like Brazil’s set-up (with Fred primarily in the side to bring the best out of Neymar) Helder Postiga is his foil.
  • Ghana: still great on the break
    Four years on, the side remains very familiar. The 2010 squad was packed with youth, and therefore it’s no surprise that the majority of players have retained their places as they’ve gained more experience. But as Ghana’s reputation has grown, they’ve been forced to adapt to different challenges. When they were the underdogs, they could sit back, remain compact and counter-attack extremely swiftly. Now opponents are aware of that threat, they’re forced to become more proactive, but lack the creativity and incision to dominate games and score goals.
  • Germany: need the right combination upfront
    Gotze has occasionally done OK in the false nine role and combined nicely with Ozil, but it hasn’t been flawless – Gotze coming towards the ball and Ozil breaking into the space is great aesthetically, but there’s no great goal threat. Muller could be pushed upfront, of course, but in a way this could bring less variety to the attacking quartet.
  • Belgium: can they succeed without proper full-backs?
    Having been something of an irrelevance on the biggest stage just four years ago, they suddenly find themselves with an impressive generation of top-class players at Europe’s biggest clubs.

    As a result, they’ve been cited as the competition’s ‘dark horse’ by many. That term doesn’t really make sense, though – for a start, Belgium are the fifth-favourites, and considered more likely to triumph than the likes of Italy or France. More importantly, Belgium haven’t performed well enough to suggest they’re as good as the sum of their parts.

  • South Korea: organised but prone to mistakes
    Korea’s problem is the lack of a top-class striker. Kim Shin-wook is stylistically no more than a Plan B in this side, and Park Chu-young remains something of a mystery – signed by Arsenal three years ago, but barely noticeable and twice loaned out. He’s still first-choice for Korea, but the Arsenal failure prompted a dramatic decline in his goalscoring ability at international level, too – he has 24 international goals, but just one since November 2011. He’s playing in a group with some strong centre-backs, and therefore it’s difficult to see him scoring goals.
  • Algeria: young and mobile
    Coach Vahid Halilhodzic has the side well organised, but also committed to playing good attacking football, with plenty of movement amongst the front three, and a mobile, young and technically proficient midfield trio too. Algeria are receiving less attention than the other four African sides in this competition, probably because of the lack of star names – but they’re actually the highest-ranked African side in the latest FIFA rankings.
  • Russia: Shirokov a huge loss
    Whereas Russia played primarily on the counter-attack at Euro 2012, taking advantage of the strong Zenit connection in the side, Capello has favoured different players and there’s no longer such cohesion about Russia, and little rotation of the midfield triangle. In fact, it’s difficult to understand this side’s specialism – the defence is underwhelming, they don’t keep possession particularly well, and the counter-attacking threat is no more than decent. When you factor in the suspicion Capello might be building for the 2018 World Cup on home soil, it’s difficult to find reasons to back them.

Monday, June 9, 2014

Ole! Ole Ole Ole! (Part 1)

There's less than 72 hours to go, so it's time to get serious.

Are you still having trouble naming the 32 countries who are competing? Well, don't worry, the team at Zonal Marking have got you covered!

In part 1, they cover the first half of the teams, as follows:

  • Brazil: organised, structured, and the favourites
    The ‘joga bonito’ ideal has always been something of a myth, with Brazil usually boasting a solid backbone and then giving a couple of talented individuals creative license and positional freedom. In that respect, while this isn’t a legendary Brazilian side and it’s easy to yearn for the ‘three Rs’ that dominated Scolari’s 2002 team, it’s actually roughly what we’ve come to expect from Brazil. The central midfielders are extremely basic and functional, the full-backs bomb on, the attackers combine nicely. Nevertheless, it’s also the most ‘European’ side Brazil have ever taken to a World Cup: the shape is 4-2-3-1, the speed of transitions are very quick, the attackers work hard without the ball.
  • Croatia: great midfield guile but what else?
    There are few nations that love playmakers as much as Croatia, and national team coach Niko Kovac appears set to field three together in a highly creative midfield trio.
  • Mexico: talented squad, but highly unpredictable
    By now, they should be capable of pushing on and finally reaching the quarter-final stage. They were genuinely impressive at points in 2010, won the Gold Cup in 2011 with a brilliant 4-2 win over the USA in one of the best international finals in recent memory, then shocked Brazil the next year by winning the Olympic title.
  • Cameroon: need to get the midfield balance right
    Coach Volker Finke was once renowned as an attack-minded coach, but with Cameroon has found himself lacking in the creative midfield department, and therefore has tried to make his side organised, compact and disciplined, and depended upon quick attacking down the flanks. The major question is how he structures his midfield trio, to get the balance between defensive solidity and attacking potential.
  • Spain: can they keep their incredible run going?
    When you consider their distinctive playing style, and the way they’ve taken possession football to the extreme, they’re unquestionably one of the greatest international sides in history. They are, however, more vulnerable than in previous years.
  • Netherlands: still undecided on best formation
    Van Gaal stuck to conventional 4-3-3 and 4-2-3-1 formations throughout qualification, but he’s suddenly decided that these shapes might not suit his players after all. The absence of Kevin Strootman was part of this thinking, but it’s still surprising that he’s tried two completely different shapes in the pre-tournament warm-ups.
  • Australia: building for the future
    Equally problematic is Australia’s draw – they’re in a group alongside the two World Cup finalists from 2010, plus Chile. It will be almost impossible for them to qualify, and they’re unlikely to register a victory. Therefore, Australia seem set to use this tournament as a learning experience ahead of the 2015 Asian Cup on home soil, and while coach Ange Postecoglou will be keen to avoid any thrashings, Australia will play open football and build for the future, rather than attempt to grind out 0-0 draws.
  • Chile: like in 2010, the most attacking side
    Yet Chile continue to be enthralling. In March, they travelled to Stuttgart to play Germany, and absolutely battered them. They had 17 shots to 5, and 7 on target to 3. And yet, predictably, they lost 1-0. It’s almost illogical how a side can dominate games so clearly, yet fail to score.
  • Colombia: absences to prompt tactical re-think?
    Falcao’s absence might change things, however. His role for the national side was strange – sometimes, like at the Copa America in 2011, he was overly keen to become involved in link-up play, but didn’t do it very well. Other times, he stayed upfront and seemed distant from the rest of the side. While a brilliant goalscorer, Falcao wasn’t particularly good at linking play and providing the attacking midfielders with service, and it’s not too much of an exaggeration to suggest Colombia might play better without him.
  • Greece: ten years on, a similar approach
    It’s now ten years since Greece shocked Europe by triumphing at Euro 2004, with three consecutive 1-0 knockout victories against the holders, the best team, and then the hosts.

    It was a genuinely remarkable victory, perhaps the greatest upset in the history of international football, and it feels like Greece are attempting to replicate that formula. From their ten qualification group games, they recorded five 1-0 victories – although they opened up more in the play-off win against Romania.

  • Ivory Coast: need individual magic
    It’s difficult to think of another big nation that has appointed a completely inexperienced foreigner as coach, and it remains difficult to deduce Lamouchi’s managerial style. The Ivory Coast were hugely unimpressive at last year’s Africa Cup of Nations, struggling through the group stage thanks to some late goals, before being defeated at the quarter-final stage by Nigeria. Lamouchi’s plan is clear: 4-3-3, with Yaya Toure given plenty of license to break forward, but the side doesn’t seem particularly cohesive.
  • Japan: good between the boxes
    While Japan don’t always collect good results, they boast great technical quality in the midfield zone. The two most famous stars are Shinji Kagawa and Keisuke Honda, but arguably more important to Japan’s style of play are the two holding midfielders. While their partnership has sometimes been broken up in recent matches, the combination of Yasuhito Endo and Makoto Hasebe has proved highly effective over the last few years, and this Japan side are capable of controlling matches against top-class opposition.
  • England: potentially dangerous on the break
    It was more about what that selection symbolised. Were England going to rely upon a member of the ‘golden generation’ yet again, despite their constant failures at World Cups, or were they going to turn to a fresh, exciting, attacking and technically excellent youngster, to evolve the side?
  • Uruguay: past their best?
    Three years ago, Uruguay were unquestionably the best side in South America – they were the only South American side to reach the semi-finals of the World Cup in 2010, before winning the Copa America a year later.

    Their form since then, however, has been extremely poor. Their qualification was a disaster, forced into a play-off against Jordan (which they unsurprisingly won comfortably) and they’ve clearly regressed in the last four years.

  • Costa Rica: extremely defensive
    While the difficult draw was always going to force Costa Rica to play defensively, that’s essentially their favoured style anyway. Their Colombian coach Jorge Luis Pinto, a good tactician who has won the league in four different countries, has generally favoured a cautious system that is 3-4-2-1 on the rare occasions Costa Rica have possession, but in reality more like a 5-4-1. They’re happy for the opposition to have time on the ball, concede space in midfield, and instead pack their penalty box.
  • Italy: Prandelli not sure of his formation
    Italy will continue to play the positive, attack-minded football Prandelli has encouraged since taking charge in 2010, with Andrea Pirlo still the star player in his final World Cup. But there are still lots of question marks, and a few underwhelming options in various positions.

Meanwhile, make sure you know what you're looking for when you watch: How to Watch the World Cup Like a True Soccer Nerd: Understanding the brainy side of the beautiful game

Answer those three questions and you’ll have a general idea about what kinds of players are on the field. What it actually looks like when they play is another question entirely, and it’s one more informed by tactics than by personnel.

Sunday, June 8, 2014

Git GUI tools for mid-size repositories

I've been trying out different git GUI tools.

Git itself, of course, comes with both gitk and the git gui, but there are a number of other tools as well.

Gitk works quite nicely on small repositories, such as the git repo itself.

I tried various git GUI tools on a more substantial repository: a clone of the stable Linux kernel, which is certainly sizable, clocking in at about 1.4 GB on disk.

When I try to bring up gitk on the Linux kernel repo, it whirs and hums for several minutes, then eventually crashes with an error saying something like "unable to allocate 112 bytes".

As an experiment, I downloaded Atlassian's SourceTree tool. It looks like a beautiful tool, and indeed when I brought it up on the Linux kernel repo, it did not crash.

However, it was so slow as to be unusable; clicking on a menu took 30 seconds before the menu even appeared.

When I looked with Task Manager, SourceTree had allocated 7 GB of virtual memory and was still climbing.

7 GB of virtual memory? For a 1.4 GB repo?
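
Part of the mismatch, I suspect, is that these viewers try to walk the entire commit graph up front, and the kernel's history is enormous. If you want a rough sense of what you're asking a GUI to swallow before pointing it at a repo, here's a tiny Python sketch along the lines of what I'd use (the repository path is just a placeholder):

    import subprocess

    def git(repo, *args):
        # Run a git command against the given repository and return its output.
        return subprocess.check_output(("git", "-C", repo) + args, text=True)

    repo = "/path/to/linux-stable"  # placeholder: wherever your clone lives

    # How many commits a history viewer has to walk to draw its graph.
    commits = git(repo, "rev-list", "--count", "--all").strip()
    print("commits reachable from all refs:", commits)

    # How much packed object data sits on disk ("size-pack" in the output).
    print(git(repo, "count-objects", "-v", "-H"))

My hunch is that the commit count, not the on-disk size, is the better predictor of how hard a graph viewer has to work.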

Is the Linux kernel repo considered "too large to view", nowadays? Are there GUI tools that work with this repo?

Is it just that git tools don't work well on Windows?

If you know, drop me a line...

That's it, it's all over now.

Turing Test Success Marks Milestone in Computing History

The 65 year-old iconic Turing Test was passed for the very first time by supercomputer Eugene Goostman during Turing Test 2014 held at the renowned Royal Society in London on Saturday.

'Eugene', a computer programme that simulates a 13 year old boy, was developed in Saint Petersburg, Russia. The development team includes Eugene's creator Vladimir Veselov, who was born in Russia and now lives in the United States, and Ukrainian born Eugene Demchenko who now lives in Russia.

Of course, this was a fairly specialized program, designed specifically for this purpose:

"Eugene was 'born' in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything. We spent a lot of time developing a character with a believable personality. This year we improved the 'dialog controller' which makes the conversation far more human-like when compared to programs that just answer questions. Going forward we plan to make Eugene smarter and continue working on improving what we refer to as 'conversation logic'."

Still, quite a result.

Friday, June 6, 2014

Apropos of nothing

I'm just all over the place recently, a real medley of randomness.

Perhaps it's because the World Cup is LESS THAN ONE WEEK AWAY!!!!

  • Explore Every 2014 World Cup Stadium With Google Street View
    Unidade Gestora do Projeto Copa, the Brazilian organization in charge of building the stadiums, built 7 new stadiums, and renovated 5 more to create a wide palette of futebol experiences. You can see all 12 in the gallery above. Each is unique, but the stadiums can start to blur together in the excitement of the tournament, so head over to Google Maps’ World Cup Stadium page to get familiar before the games begin.
  • This Is What a Cursed Soccer Stadium Looks Like
    Then the Uruguayan winger Alcides Ghiggia put them over with a sudden strike. “Only three people have, with just one motion, silenced the Maracanã,” he famously said later. “Frank Sinatra, Pope John Paul II and me.”

    The loss, known as the Maracanaço (since used to refer to any loss by the national team on its home field), became a defining moment for Brazilian soccer.

  • Normal Sex, No Acrobatics: The Variety Of Sexual Restrictions Placed On World Cup Players
    Whether players can or should be allowed to have sex around major sporting events is a common and often funny topic, but sex and sexuality have also been bigger issues around Brazil’s World Cup — and not just when it comes to players.
  • Setting screens is not a lost art
    Flat Screens

    A specialty of the San Antonio Spurs, and especially Mr. Tim Duncan. Timmy's robotic consistency can often hypnotize the viewing public, but his screen-setting is an outstanding combination of good fundamentals and some sneaky techniques. The "Flat Screen" is an excellent example of creating a new screening angle that previously didn't exist on a high pick-and-roll.

  • Colin Kaepernick's contract clause could be the future of negotiations
    Kaepernick will have to purchase an insurance policy that would pay out $20 million to the 49ers in the event of a career-ending injury, according to Albert Breer of the NFL Network.

    That's particularly interesting for a number of reasons. For one, those kinds of things are relatively rare under the new collective bargaining agreement, and we didn't hear about them a whole lot before that either (it is possible that these deals were more commonplace and we simply weren't aware). For two, large portions of Kaepernick's contract are fully guaranteed against injury, so if such a thing were to happen at the right time, both parties would get paid -- Kaepernick by the 49ers and the 49ers by the insurance company.

  • Lion Creek restoration
    The ground where I was standing is mapped at about 8 feet elevation. The other end of the park is approximately where the historic coastal marsh started, so they’re doing the right thing for this location. The culvert is still there to handle floods, but a real creek bed evolves to coexist with floods. So what we have now is sort of a zoo creek. I’ll take it over what was there before.
  • Algorithm as Director
    Just like other members of the board, the algorithm gets to vote on whether the firm makes an investment in a specific company or not. The program will be the sixth member of DKV's board.
  • They Hack Because They Can
    We see a great deal of hand-waving and public discussion about the possibility that foreign cyber attackers may one day use vulnerabilities in our critical infrastructure to cause widespread problems in the United States. But my bet is that if this ever happens in a way that causes death and/or significant destruction, it will not be the result of a carefully-planned and executed cyber warfare manifesto, but rather the work of some moderately skilled and bored cracker who discovered that he could do it.
  • NSA: Inside the FIVE-EYED VAMPIRE SQUID of the INTERNET
    Tricking a company like RSA Security into promoting backdoored and sabotaged algorithms for default use in security products is "enabling". Physically sabotaging Cisco routers while they are being shipped out of the US to commercial customers - a serious crime when committed by anyone but the Federal Bureau of Investigation and the NSA - is "enabling".

    Ensuring that communications security encryption chips "used in Virtual Private Networks and Web encryption devices" secretly ship with their security broken open, as specified in the current US "cryptologic capabilities plan", is "enabling". In the coming year, NSA's budget for such Sigint "enabling" is $255m.

  • How I discovered CCS Injection Vulnerability (CVE-2014-0224)
    ChangeCipherSpec MUST be sent at these positions in the handshake. OpenSSL sends CCS in exact timing itself. However, it accepts CCS at other timings when receiving. Attackers can exploit this behavior so that they can decrypt and/or modify data in the communication channel.
  • Early ChangeCipherSpec Attack
    If a ChangeCipherSpec message is injected into the connection after the ServerHello, but before the master secret has been generated, then ssl3_do_change_cipher_spec will generate the keys (2) and the expected Finished hash (3) for the handshake with an empty master secret. This means that both are based only on public information. Additionally, the keys will be latched because of the check at (1) - further ChangeCipherSpec messages will regenerate the expected Finished hash, but not the keys.
  • Why King George III Can Encrypt
    We decided to test whether better metaphors might be able to close this gap between security and usability. Specifically, we wanted metaphors that represented the cryptographic actions a user performs to send secure email and were evocative enough that users could reason about the security properties of PGP without needing to read a lengthy, technical introduction. We settled on four objects: a key, lock, seal and imprint. To send someone a message, secure it with that person’s lock. Only this recipient has the corresponding key, so only they can open it. To prove your identity, stamp the message with your seal. Since everyone knows what your seal’s imprint looks like, it’s easy to verify that the message came from you.
  • Why Atom Can’t Replace Vim: Learning the lesson of vi
    Vim, though, is different. Vim only has one command: d, which is “delete.” What does it delete? You name it, literally. The d command gets combined together with those commands for movement: dw deletes to the next word, d$ to the end of the line, dG to the end of the file, and d} to the end of the paragraph.

    This is where Vim’s composability leads to its power. Emacs and Atom don’t have commands for deleting to the end of a file or a paragraph — even when they have commands to move to those places. But in Vim, if you can move to a location, you can delete to that location.

I cannot WAIT to see Michael Bradley, Alejandro Bedoya, Fabian Johnson, Graham Zusi, and Jermaine Jones out on the field together.

Ten more days!

Wednesday, June 4, 2014

Git commit/tag signing

It's interesting to look at the evolution of the git commit signing process.

A decade ago, in reaction to various legal events, Linus Torvalds proposed the adoption of a signing process for changes: Explicitly documenting patch submission:

So what I'm suggesting is that we start "signing off" on patches, to show the path it has come through, and to document that chain of trust. It also allows middle parties to edit the patch without somehow "losing" their names - quite often the patch that reaches the final kernel is not exactly the same as the original one, as it has gone through a few layers of people.

There was more discussion of this over the years, for example this email thread, in which Linus commented:

The thing is, what is it you want to protect? The tree, the authorship, the committer info, the commit log, what?

And it really does matter. Because the signature must be over some part of the commit, and since the SHA1 of the commit by definition contains everything, then the _safest_ thing is always to sign the SHA1 itself: thus a tag.

Anything else is always bound to only sign a _part_ of the commit. What part do you feel like protecting? Or put another way, what part do you feel like _not_ protecting?

So the way git does signatures protects everything. When you do a tag with "git tag -s" on a commit, you can absolutely _know_ that nobody will ever modify that commit in any way without the tag signature becoming invalid.
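
In practice, that looks something like the sketch below; this assumes you already have a GPG key configured as your git signing key, and the tag name is made up:

    import subprocess

    def git(*args):
        # Run a git command in the current repository, raising on failure.
        subprocess.run(("git",) + args, check=True)

    # "git tag -s" creates an annotated tag whose GPG signature covers the tag
    # object; the tag object names the commit SHA1, and that SHA1 in turn
    # covers the tree, parents, author, committer, and message.
    git("tag", "-s", "v1.0", "-m", "signed release tag")

    # Verification fails if the tagged commit (or anything it reaches) has
    # been rewritten since the tag was signed.
    git("tag", "-v", "v1.0")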

The process has stabilized, and is now documented in section 12 of How to Get Your Change Into the Linux Kernel, or, Care And Operation Of Your Linus Torvalds, where we read:

The sign-off is a simple line at the end of the explanation for the patch, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch. The rules are pretty simple: if you can certify the below:
        Developer's Certificate of Origin 1.1

        By making a contribution to this project, I certify that:

        (a) The contribution was created in whole or in part by me and I
            have the right to submit it under the open source license
            indicated in the file; or

        (b) The contribution is based upon previous work that, to the best
            of my knowledge, is covered under an appropriate open source
            license and I have the right under that license to submit that
            work with modifications, whether created in whole or in part
            by me, under the same open source license (unless I am
            permitted to submit under a different license), as indicated
            in the file; or

        (c) The contribution was provided directly to me by some other
            person who certified (a), (b) or (c) and I have not modified
            it.

        (d) I understand and agree that this project and the contribution
            are public and that a record of the contribution (including all
            personal information I submit with it, including my sign-off) is
            maintained indefinitely and may be redistributed consistent with
            this project or the open source license(s) involved.

then you just add a line saying

        Signed-off-by: Random J Developer <random@developer.example.org>
using your real name (sorry, no pseudonyms or anonymous contributions.)
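
Mechanically, you don't even have to type the line yourself: "git commit -s" fills in the trailer from your configured name and email. A tiny sketch (the commit message is made up, and --allow-empty is only there so it runs without staged changes):

    import subprocess

    # "git commit -s" appends a Signed-off-by: trailer built from user.name
    # and user.email -- exactly the line quoted above.
    subprocess.run(["git", "commit", "--allow-empty", "-s",
                    "-m", "example: demonstrate sign-off"], check=True)

    # Show the message of the commit we just made, trailer included.
    subprocess.run(["git", "log", "-1", "--pretty=%B"], check=True)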

If you're studying processes like these, I really recommend you spend some time with Mike Gerwitz's epic mini-novella: A Git Horror Story: Repository Integrity With Signed Commits, at least to the level that you can follow its summary:

Be careful of who you trust. Is your repository safe from harm/exploitation on your PC? What about the PCs of those whom you trust?

Your host is not necessarily secure. Be wary of using remotely hosted repositories as your primary hub.

Using GPG to sign your commits can help to assert your identity, helping to protect your reputation from impostors.

For large merges, you must develop a security practice that works best for your particular project. Specifically, you may choose to sign each individual commit introduced by the merge, sign only the merge commit, or squash all commits and sign the resulting commit.

If you have an existing repository, there is little need to go rewriting history to mass-sign commits.

Once you have determined the security policy best for your project, you may automate signature verification to ensure that no unauthorized commits sneak into your repository.

Sunday, June 1, 2014

The great git workflow discussion

It's been 4.5 years now since Vincent Driessen published his thought-provoking article on git branching workflows: A successful Git branching model.

In a slight conflation of names, Driessen's workflow model has been known by the name of a toolset that he also contributed, which helps implement the model: git-flow.

In the years since, there has been a wealth of variations, elaborations, and alternatives tossed into the ring. Reading them is a fascinating way to keep up with the ongoing debate about how teams work, and about how their tools can help them work.

  • A successful Git branching model
    We consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state.

    We consider origin/develop to be the main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.

    When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number. How this is done in detail will be discussed further on.

    Therefore, each time when changes are merged back into master, this is a new production release by definition. We tend to be very strict at this, so that theoretically, we could use a Git hook script to automatically build and roll-out our software to our production servers everytime there was a commit on master.
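
    (A small, hypothetical sketch of this merge-and-tag release step appears after this list.)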

  • Why Aren't You Using git-flow?
    I’m astounded that some people never heard of it before, so in this article I’ll try to tell you why it can make you happy and cheerful all day.
  • git-flow Cheatsheet
    Git-flow is a merge based solution. It doesn't rebase feature branches.
  • Issues with git-flow
    At GitHub, we do not use git-flow. We use, and always have used, a much simpler Git workflow.

    Its simplicity gives it a number of advantages. One is that it’s easy for people to understand, which means they can pick it up quickly and they rarely if ever mess it up or have to undo steps they did wrong. Another is that we don’t need a wrapper script to help enforce it or follow it, so using GUIs and such are not a problem.

  • On DVCS, continuous integration, and feature branches
    The larger point I’m trying to make is this. One of the most important practices that enables early and continuous delivery of valuable software is making sure that your system is always working. The best way for developers to contribute to this goal is by ensuring they minimize the risk that any given change they make to the system will break it. This is achieved by keeping changes small, continuously integrating them into mainline, and making sure there is a comprehensive suite of automated tests to verify that changes behave as expected and don’t introduce any regressions.
  • My Current Java Workflow
    I then setup a Jenkins job called “module-example snapshot”. This checks out any pushes to the develop branch, runs the gradle build task on it (which runs tests and produces artifacts on successful test passes) and then pushes a snapshot release to our in house artifactory server. This means any push to develop will trigger a build that releases a snapshot jar of that module that others could use for their development.
  • GitFlow and Continuous Integration
    So, what does one do with this information? Is use of GitFlow or promiscuous integration a bad idea? I think that it can work very well for some teams and could be very dangerous in others. In general, I like it when the VCS stays out of the way and the team gets in the habit of pushing changes and looking to the CI server validation that everything is ok. Introducing promiscuous integration could interrupt this cycle and allow code changes to circumvent the mainline longer than they should. This branching scheme feels complex, even with the addition of GitFlow.
  • Another Git branching model
    But cheap merging is not enough, you also need to be able to easily pick what to merge. And with Git Flow it’s not easy to remove a feature from a release branch once it’s there. Because a feature branch is started from develop it is bound by its parents commits to other features not yet in production. As a result, if you merge a feature without rebasing you always get more commits than wanted.
  • Branch-per-Feature
    Most of this way of working started from the excellent post called “A Successful Git Branching Model”. The important addition to this process is the idea that you start all features in an iteration from a common point. This would be what you released for the last one. This drives home the granular, atomic, flexible nature that features must exhibit for us to deliver to business in the most effective way. Git flow allows commits to be done on dev branches. This workflow does not allow that.
  • git bugfix branches: choose the root wisely
    The solution I like best involves first finding the commit that introduced the bug, and then branching from there. A command that is invaluable for this is git blame. It is so useful, I would recommend learning it right after commit, checkout, and merge. The resulting repository now looks like Figure 3, where we have found the source of the bug deep inside the development history.
  • Some thoughts on continuous integration and branching management with git
    Given a stable/production branch P, and a set of feature branches, say, FB1, FB2 and FB3, I want a system that:
    • Combines (merge) P with every branch and test it, say P + FB1, P + FB2, P + FB3.
    • Select the successful branches and try to merge them together. Say FB2 failed, so we would try to build and test P + FB1 + FB3 and make it a release candidate.
    • Should a conflict appear, notify the developers so they can fix it.
    • The conflict resolution is saved so not to happen again.
    • The process is repeated continuously.
  • A (Simpler) Successful Git Branching Model
    At my work, we have been using a Git branching strategy based on Vincent Driessen’s successful Git branching model. Over all, the strategy that Vincent proposes is very good and may work perfectly out of the box for many cases. However, since starting to use it I have noticed a few problems as time goes on
  • Two Git Branching Models
    In current projects, we tend to float between two branching models depending on the requirements of the customer / project and the planned deployment process.
  • Git Branching Model
    A workflow for contributions is usually based on topic branches. Instead of committing to the particular version branch directly, a separate branch is made for a particular feature or bugfix where that change can be developed in isolation. When ready, that topic branch is then merged into the version branch.
  • What is Your Branching Model?
    Perforce from the middle 90’s and Subversion from 2001 promoted a trunk model, although neither preclude other branching models. Google have the world biggest Trunk-Based-Development setup, although some teams there are going to say they are closer to Continuous Deployment (below). Facebook are here too.
  • Git Tutorials: Git Workflows
    The array of possible workflows can make it hard to know where to begin when implementing Git in the workplace. This page provides a starting point by surveying the most common Git workflows for enterprise teams.

    As you read through, remember that these workflows are designed to be guidelines rather than concrete rules. We want to show you what’s possible, so you can mix and match aspects from different workflows to suit your individual needs.

  • Git Branching - Branching Workflows
    Now that you have the basics of branching and merging down, what can or should you do with them? In this section, we’ll cover some common workflows that this lightweight branching makes possible, so you can decide if you would like to incorporate it into your own development cycle.
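
To make Driessen's release step (quoted in the first item above) concrete, here is roughly what it looks like at the command level, wrapped in a little Python. It's only a sketch: the branch names follow his article, the version number is made up, and there's no error handling beyond failing loudly.

    import subprocess

    def git(*args):
        # Run a git command in the current repository, raising on failure.
        subprocess.run(("git",) + args, check=True)

    version = "1.2.0"  # made-up release number

    # Fold the accumulated development work back into master...
    git("checkout", "master")
    git("merge", "--no-ff", "develop", "-m", "Release " + version)

    # ...and tag the merge commit, so master's history reads as a series
    # of releases.
    git("tag", "-a", "v" + version, "-m", "Version " + version)

    # At this point, per Driessen, a hook on master could build and deploy.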

SQLite and Derby

I was interested to stumble across the web document: The Design Of SQLite4.

SQLite4 is an alternative, not a replacement, for SQLite3. SQLite3 is not going away. SQLite3 and SQLite4 will be supported in parallel. The SQLite3 legacy will not be abandoned. SQLite3 will continue to be maintained and improved. But designers of new systems will now have the option to select SQLite4 instead of SQLite3 if desired.

SQLite4 strives to keep the best features of SQLite3 while addressing issues with SQLite3 that can not be fixed without breaking compatibility.

It surprised me to learn that "internally, SQLite3 simply treats that PRIMARY KEY as a UNIQUE constraint. The actual key used for storage in SQLite is the rowid associated with each row." I think it is good that SQLite4 will amend that choice and treat PRIMARY KEY more the way a DBA would expect it to behave.

I like the fact that, overall, SQLite is continuing to move toward a more standard and "correct" implementation, by doing things like requiring that PRIMARY KEY columns be non-null, and turning foreign key constraints on by default.
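
All three of those SQLite3 behaviors are easy to see for yourself from Python's built-in sqlite3 module; a small sketch, with made-up table and column names:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # A non-INTEGER PRIMARY KEY in SQLite3 is enforced as a UNIQUE index;
    # rows are actually stored and addressed by the hidden rowid.
    cur.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")
    cur.execute("INSERT INTO users VALUES ('a@example.com', 'Alice')")
    print(cur.execute("SELECT rowid, email FROM users").fetchall())
    # -> [(1, 'a@example.com')] -- the undeclared rowid is right there

    # And, surprisingly, that PRIMARY KEY column may even be NULL,
    # which is exactly what SQLite4 intends to disallow.
    cur.execute("INSERT INTO users VALUES (NULL, 'Nobody')")

    # Only INTEGER PRIMARY KEY becomes a true alias for the rowid.
    cur.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, label TEXT)")
    cur.execute("INSERT INTO items (label) VALUES ('widget')")
    print(cur.execute("SELECT id, rowid FROM items").fetchone())  # (1, 1)

    # Foreign key enforcement is off by default in SQLite3 and has to be
    # switched on per connection -- another default SQLite4 plans to flip.
    print(cur.execute("PRAGMA foreign_keys").fetchone())  # (0,)
    cur.execute("PRAGMA foreign_keys = ON")

    conn.close()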

Overall, it looks like SQLite is heading in a good direction and I'm pleased to hear that.

The overall goals of SQLite are very similar to those of Derby, which I am considerably more familiar with.

The Derby community, too, continues to remain active. Here's the plan for the next major Derby release, which will be coming out this summer: 10.11.1 Release Summary:

  • MERGE statement
  • Deferrable constraints
  • WHEN clause in CREATE TRIGGER
  • Rolling log file
  • Experimental Lucene support
  • Simple case expression
  • New SYSCS_UTIL.SYSCS_PEEK_AT_IDENTITY function
  • Use sequence generators to implement identity columns
  • Add HoldForConnection ij command to match NoHoldForConnection

Although some of those features are pretty small, a few of them are large, dramatic steps forward (MERGE, CREATE TRIGGER WHEN, deferrable constraints, the Lucene integration).

In my own professional and personal life, I haven't been spending as much time with Derby recently. I no longer write code in Java for 50 hours every week, so it's hard for me to find either time or excuses to be intimately involved with Derby.

However, I try to follow along as best I can, monitoring the email lists, spending time in the Derby communities in places like Stack Overflow, and generally keeping in contact with that team, because there's a superb community of brilliant engineers working on Derby, and I don't want to lose touch with them.

So: way to go, SQLite, and way to go, Derby!