Nexus 6 Impressions


I’m intentionally avoiding the term “review” because there are already plenty of exhaustive analyses of the Nexus 6 out there (for my two favorites, see MKBHD and The Verge). Instead, I’m just going to cover a handful of elements — both good and bad — that really stood out for me as someone who has owned and actively used every single Nexus and iOS device to date.


Yes, the T-Mobile iPhone 6 from Apple Works on Verizon

I’ll get right to the point: the T-Mobile iPhone 6 (and 6 Plus) from Apple (not from a T-Mobile store) works fine on the Verizon network. Just eject the T-Mobile SIM that comes with it, insert your Verizon SIM, and boot. The T-Mobile iPhone from Apple appears to be entirely global and fully carrier-unlocked, which makes it the best choice for those of us who like to buy phones outside of contracts.

I made this discovery after trying (and failing) for several weeks to buy a new iPhone 6 without a contract — something all parties involved make as difficult as possible. You can’t order a Verizon, AT&T, or Sprint version without a contract; you can’t reserve one online and go into a store to pick it up; Apple won’t sell you a SIM-less, carrier-unlocked iPhone for the first few months after launch; and stores like Best Buy charge you $100 over retail if you want to pay the unsubsidized price (at least the one near my house does).

But I eventually gathered enough information (and certainly plenty of misinformation) from enough stores and online sources to feel fairly confident that the unsubsidized T-Mobile iPhone 6 was not only carrier unlocked, but that it would work perfectly fine on Verizon’s CDMA and LTE networks. So I decided to take a chance and order one online.

And it works. I now have an iPhone 6 on Verizon, and a 2014 Moto X on AT&T, which makes me far more connected than anyone possibly needs to be, but allows me to indulge my phone fetish to the greatest extent possible.

Yes, an intervention is probably not far off.

A Watch Enthusiast’s Review of the Samsung Gear Live with Android Wear


Update (12/10/2014): I’ve tried several more Android Wear devices, and both the hardware and the software are getting better. Android Wear has fixed some of the issues I complain about below, and the LG G Watch R and Sony Smartwatch 3 are actually pretty decent devices. (Most people like the Moto 360, but I think a round display should really be round.) All smartwatches are still a very long way from being actual watches (as opposed to devices strapped to your wrist), but I’m glad to see how quickly the industry is iterating.

Most of the reviews I’ve seen of the new Android Wear smartwatches have been from device early adopters as opposed to true watch enthusiasts, so I figured I’d provide the perspective of someone who is decidedly both. I’ve always been a gadget fanatic (I keep the latest iPhone and the best Android devices on me at all times — as of today, that’s the iPhone 5s and the HTC One M8), and as the founder of Watch Report (which I started in 2005, and finally sold last year), I’ve owned and/or reviewed hundreds of watches, from Casio to Rolex. Additionally, over the years, I’ve kept a very close eye on the smartwatch category, from MSN SPOT watches like the Tissot High-T (the best of its long-extinct class), to the Abacus Wrist PDA (never remotely practical, but undeniably fun), to more modern interpretations like the Pebble.


Review of Google Glass

There are plenty of reviews and op-eds out there on Google Glass by now — even plenty from people who have never worn it — so I’ll make this succinct, and try to cover observations that I haven’t seen elsewhere.

In general, I really like Google Glass. Once you get used to wearing it — once you’ve found a workflow that makes sense for you, and once you’ve come to rely on some of the functionality Google Glass provides — you don’t like being without it. You even become gradually more willing to endure the ogling, inquisitions, and embarrassment that inevitably come with wearing it outside the house.

Those who criticize the form factor, battery life, etc. need to keep in mind that the Explorer Edition of Google Glass is a prototype. It’s designed to introduce the concept of wearing a computer on your face, and to help developers, content creators, and consumers learn about wearable computing. Limitations aside (and there are plenty), I think the concept that Google Glass introduces is solid. I don’t know how successful the first consumer version of Glass will be, but I now believe that there are real benefits to accessing information almost effortlessly, and I think the world is probably very close to being ready for the next wave of personal and portable technology beyond the mobile phone.

Some specific thoughts and observations (in no particular order):

  • Navigation is probably my favorite feature. In fact, it might be what made me realize that wearable computing had to become a reality. Glass’s implementation isn’t perfect (it drains the battery quickly and there are still plenty of bugs), but being able to simply glance up and see turn-by-turn directions in the corner of your vision is extremely effective.
  • The camera button should deactivate when you remove Glass from your face. The camera button is on the top and the USB port is on the bottom, which means that when you turn Glass over to charge it or to connect it to your computer, you frequently take unintentional upside-down pictures (which subsequently get uploaded to Google+). Because of where I usually charge Glass, many of these pictures end up being of my crotch. The photos aren’t actually posted publicly, but you still have to go into your photo stream and clean it up from time to time (and if someone is looking over your shoulder, you might have some uncomfortable explaining to do).
  • Kids absolutely love Glass. I frequently give new technology to my children — or do my best to get feedback from teenagers — since kids usually approach new concepts without the preconceptions of adults. While I had friends who were quick to laugh at and dismiss Google Glass, my experience is that kids think the concept is amazing, and aren’t remotely ashamed or embarrassed to wear it. My older daughter wore Glass for an entire afternoon at an art festival and got significant use out of it. When she wore it out to dinner the other night, our waitress confessed that she couldn’t wait to get a pair herself. If you think that Glass is too nerdy to be successful, consider early perceptions of personal computers, mobile phones, and Bluetooth headsets. Even wristwatches were once widely thought to be a threat to masculinity; it wasn’t until soldiers began strapping their pocket watches to their wrists (far more practical than fishing them out of a pocket) that wristwatches gained acceptance as anything more than frivolous feminine accessories.
  • While kids love Glass, the thirty-degree activation feature does not work well for them. You can activate Glass by tilting your head up thirty degrees so that you don’t have to reach up and touch the touchpad. It turns out that kids are almost always looking up at roughly thirty degrees, which means Glass is constantly activating. I find that when I wear Glass, it activates more frequently than I would like, as well (despite the fact that I’m six feet tall). For instance, it activates during movies, and during the all-important motion of drinking a beer. Rather than lifting your head thirty degrees to activate Glass, I think a more useful gesture would be to look up with just your eyes — something we do less frequently, and that is more deliberate than just lifting your head. (For the curious, a toy sketch of this kind of threshold logic follows this list.)
  • One of the most frequent criticisms I’ve read about Glass is that you still need your phone. (I’ve heard this about the latest generation of smart watches, as well.) I have no idea where this complaint comes from. I don’t think the goal of wearable computing — whether it be on your face or your wrist — should be to replace other technologies, just like I don’t think phones or tablets replace personal computers. Technology more frequently adds and augments rather than displaces or replaces. The goal of wearable computing is to create new opportunities and interactions rather than render older ones obsolete. Sometimes new technologies are so obviously superior that the old ones are rapidly abandoned, but it’s more common for us to add additional devices and capabilities to our lives — at least during an initial period of transition.
  • I’m probably stating the obvious here, but future versions of Glass will need to be water resistant. Once you get used to having Glass, you don’t want to take it off just because the forecast calls for showers.
  • Glass has a built-in guest mode which is a brilliant idea. Like your phone, Glass becomes a very personal device which you will likely be reluctant to share with everyone who asks to take a peek — especially because it’s so easy to post content publicly. Glass also takes some getting used to so you probably don’t want your friends and family familiarizing themselves with the interaction models while connected to your Google+, Facebook, and Twitter accounts.
  • The one big complaint I have about Glass is the viral messaging that gets appended to texts and emails. All messages have the text “Sent through Glass” appended to them, and as far as I can tell, there’s nothing you can do about it. If Glass were free, I wouldn’t complain. But at $1,500 (actually, well over $1,600 with tax), I don’t feel like I should be forced to do viral marketing on Google’s behalf. The ironic thing is that, given the option, I would probably still append some kind of message as I do on my other devices in order to provide some context for brevity, auto-correct errors, etc. However, I really hate not being given the option to at least customize the message.
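For the technically inclined, here’s what the thirty-degree activation problem looks like in code. To be clear, this is purely hypothetical (none of it is Glass’s actual code, and the sensor axes are assumptions); it just sketches threshold-based tilt activation with a bit of hysteresis, and illustrates why a single fixed threshold misfires for anyone, like a kid looking up at adults, whose resting head position sits near the threshold:

```python
import math

WAKE_PITCH_DEG = 30.0   # tilt angle that wakes the display
SLEEP_PITCH_DEG = 20.0  # lower threshold (hysteresis) so the display doesn't flicker

def pitch_degrees(ax: float, ay: float, az: float) -> float:
    """Approximate head pitch from a hypothetical accelerometer reading (in g's),
    assuming x points forward out of the wearer's face."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def update_awake(awake: bool, ax: float, ay: float, az: float) -> bool:
    """Return the new display state given the current one."""
    pitch = pitch_degrees(ax, ay, az)
    if not awake and pitch >= WAKE_PITCH_DEG:
        return True   # head tilted up past the wake threshold
    if awake and pitch < SLEEP_PITCH_DEG:
        return False  # head dropped well below the threshold
    return awake      # otherwise, no change

# A kid looking up at an adult hovers near thirty degrees all day, so the
# display wakes constantly; no tuning of a single pitch threshold fixes
# that without also breaking the gesture for everyone else.
```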

Update (5/26/2013): After receiving a great deal of feedback and several questions, I decided to add a few additional points:

  • In my opinion, the camera is not the killer feature. Photos are difficult to compose with Glass, you can’t edit them like you can on a phone, and sharing is much more difficult and limited. (This final point could partially be addressed in software, but it still wouldn’t be as easy as a mobile phone.) The one advantage Glass has over a camera in a phone is that it is more available and accessible. However, that is also perhaps one of Glass’s biggest barriers to adoption. Having a camera constantly pointed at the world prompts all kinds of privacy concerns which bloggers, journalists, and even Congress will not tire of debating anytime soon. Like most things in life, technology is a tradeoff: You measure the good against the bad, and in the end, decide if something is a net gain or a net loss. At least right now, I believe that the camera in Glass is a net loss. The ability to take a picture a few seconds faster than you could with your phone is nice, but it’s probably not worth the accompanying privacy issues (it’s only a matter of time before Glass is banned from my gym, for instance), the additional bulk and weight of the device, and the fact that most pictures simply aren’t all that good. Having gained some experience with Glass now, if I had the option of buying a smaller, lighter, and less threatening version without a camera, I think I probably would.
  • Another issue that Glass has brought to the forefront is the matter of distraction. Distraction is no worse with Glass than with any other device. Glass never displays notifications unexpectedly (with the exception of the display activating when you tilt your head up, but that can be disabled), and therefore it is no more distracting than a mobile phone. In fact, because you can glance at a map or a message even faster, I would argue that Glass is possibly less distracting than a phone. That is not to say Glass is distraction-free, however. Not even close. I don’t believe that technologies like heads-up displays or hands-free calls eliminate or even reduce distraction, since the lack of attention is far more dangerous than the simple mechanics and logistics of interacting with a device. That said, when used responsibly, Glass should not be any more dangerous than your phone. (Note that I am not dismissing the dangers of phones — I’m just claiming that Glass is no worse.)
  • Related to distraction is the question of presence. I’ve noticed that a significant number of people are offended by Glass because they feel like it represents just one more device to come between people, to distract us from the current time and place, and to devalue human interaction. I’m undecided on this point. I personally choose to be discreet with my devices; unless I’m specifically monitoring something (usually work-related), I rarely pull out my phone in social situations, and I always give someone standing in front of me priority over whatever might be happening on my phone, watch, or Google Glass. That said, I don’t feel the need to impose my personal philosophies on others. I think it remains to be seen exactly what the repercussions are of integrating all these data streams into our consciousness. I know there are studies which are not optimistic, but devices like phones and Glass remind me a great deal of video games. When I was a kid, the conventional wisdom was that video games would “melt your brain” (and there are myriad studies that claim to back that up). I’m sure I’ve heard that phrase — or some variation thereof — dozens of times throughout my life. However, I’ve been gaming pretty consistently from the time I got my first hand-held LED football game, my first PC, and my first console (Intellivision), and I believe that video games have enriched my life in a number of ways. I believe that responsible and conscientious integration of technology into one’s life can be very positive and empowering. However, I also acknowledge that it can have detrimental effects on relationships and quality of life when not moderated. For the most part, I don’t find discussions of whether certain technologies are “good” or “bad” to be productive. I think it’s up to us individually to find where and how technology fits into our lives — to embrace it where it works, and reject or modify it where it does not. I see technology as a massive, never-ending human experiment, and we shouldn’t be afraid to try new things, and to make plenty of mistakes and adjustments along the way. And at least in instances where people’s lives are not at stake, I think we should be patient with those around us who are trying to figure it out for themselves.

Although I really love Google Glass, I don’t love it because it’s perfect, or because I think Google got it exactly right. I love Glass because it is an early attempt at practical wearable computing, and I think it proves that wearable computing is going to happen. The world was not ready for digital books on a large scale until Amazon introduced the Kindle, or for tablet computers until Apple introduced the iPad, so it’s hard to say whether Google is ahead of its time, or whether the Glass team is successfully creating the environment it needs to drive mainstream adoption. But whether it happens now or in the near future, there’s little doubt in my mind that it will eventually happen.

Whenever I question whether a new technology will be successful, I think back to a conversation I had with a friend of mine after the original iPhone was introduced. By any definition, he was and still is an Apple fanboy, but he had no interest in the iPhone because it wasn’t well suited to single-handed use. He was too focused on the drawbacks, and not focused enough on how the positives would outweigh the negatives. Today, he’s a big iPhone fan, and I’m sure he couldn’t imagine his life — or probably even a single day, for that matter — without an iPhone in his pocket.

So to all the nonbelievers: get your jeering and finger-pointing out of the way now, because it may not be long before you’re wearing something like Google Glass yourself.

The Ultimate Irony of Climate Change: Before We Created It, It Created Us

[Image: exhibit chart correlating human brain size with climate fluctuations]

The picture above was taken at one of the best exhibits I’ve ever seen in any museum: the Hall of Human Origins at the Smithsonian National Museum of Natural History. The chart in the lower right-hand corner shows the correlation between brain size in humans and drastic changes in climate with an emphasis on the period between 800,000 and 200,000 years ago. (A nicer version of this chart is available on the exhibit’s website.)

It makes perfect sense that greater intelligence (as evidenced by larger brains) proved advantageous during times of unpredictable weather, since the more humans were able to plan ahead, communicate, and work collaboratively, the more likely they were to survive. In fact, I’ve even read that the cranial capacity of fossilized skulls increases the farther from the equator they were found, suggesting a correlation between larger brains and harsher weather. In other words, in terms of natural selection, everything here appears to be in perfect working order.

But while there are no surprises in the relationship between brain size and climate change, there certainly is plenty of irony. The eventual results of all that hard-fought intelligence were the agricultural and industrial revolutions — precisely the technological advances that are most closely associated with modern climate change. Therefore, one could theorize that surviving rapid climate change bestowed upon humanity just enough intelligence to create even more rapid and dangerous climate change. One might even go so far as to say that the human brain has found a way to perpetuate its own growth.

I’ve read conflicting predictions of how this latest wave of climate change will ultimately affect brain size. Since equatorial temperatures will continue to expand latitudinally, it’s possible that the human brain could suddenly stop growing; on the other hand, due to all the challenges humanity faces as a result of rapid climate change, the size of our brains could continue to grow — perhaps at an even faster pace. Personally, I’m hoping for a future where we learn to use technology, intelligence, and even a little empathy to finally take control of our own evolutionary paths. Although it’s a little late for me to be genetically engineered, I wouldn’t mind a few multi-core petaflop processors embedded in my brain and at least one robotic arm.

The Miniaturization of Warfare


Growing up in the 80s, we were taught to fear a nuclear attack by the Soviet Union. Today, I think it’s fair to say that most people believe cyberwarfare is probably a greater threat than a full-scale nuclear holocaust.

What many people don’t fully grasp about nuclear weapons (in particular, those who object to reducing our stockpiles) is that they constitute a tremendous expense without all that much benefit — primarily due to the fact that governments can’t actually use them. Whereas the U.S. currently deploys conventional weapons on a weekly and sometimes daily basis, it’s very difficult to imagine a scenario where the United States could justify launching a nuclear attack of even the smallest scale.

This concept is critical to the plot of my story The Epoch Index, and is probably best described by the following passage:

After centuries-old rivalries finally escalated into full-scale nuclear conflicts, the United Nations drafted and unanimously voted into effect a resolution unequivocally banning any sized nuclear arsenal anywhere on the planet. The U.S. and other early nuclear adopters were happy to back (and help enforce) the new international law, having long ago anticipated the nuclear backlash and invested heavily in Prompt Global Strike systems: networks of launch vehicles and hypersonic cruise missiles designed to deliver warheads filled with scored tungsten rods twice as strong as steel and capable of ripping any structure anywhere on Earth to shreds in less time than it takes to have a pizza delivered. Thermonuclear hydrogen bombs were old news, as far as most world powers were concerned. The only reason to unleash 50 megatons of destruction is if you have very little faith in the accuracy of your delivery mechanisms. Modern weaponry can target down to the square centimeter, and since it uses real time topographical guidance, it can do so even when your entire GPS satellite network is compromised. Besides, what’s the point of defeating another nation if your great grandchildren can’t even set foot in it, and just about everything worth looting, pillaging, or oppressing is either incinerated or radioactive? Nuclear weapons are clumsy and inelegant. High-tech conventional is the new thermonuclear. Modern militaries say less is more.

In my upcoming novel Kingmaker, drones are a central theme:

It wasn’t special operations teams that concerned him; he was confident he could see a takedown coming in plenty of time, and even if he didn’t, he probably stood as good a chance of walking away from a team of Navy Seals as any one of the Seals themselves. What Alexei feared was death from above. With a well coordinated drone strike, you were simply there one moment, and everywhere but there the next. It didn’t matter how quick you were, or how smart, or how well trained. If you were on the CIA’s radar, they knew how to get you off of it and still be home in time for dinner and to kiss the kids goodnight. All it cost them was barely an hour’s worth of classified paperwork that everyone already knew would never see the inside of either a civilian or military courtroom.

As a deterrent, maintaining a nuclear arsenal equal to (or slightly greater than) those of one’s rivals still makes some strategic sense. However, the reality is that weapons which can be relatively inexpensively and surreptitiously deployed are far more menacing than weapons that everyone knows you cannot actually use. In other words, the world has much more to fear from weapons that can — without due process — target buildings, vehicles, and even individuals than from indiscriminate warheads that can destroy entire cities.

Just as in the world of technology, we are now witnessing the miniaturization of warfare.

How the Chrome Dev Tools Got Me an Awesome License Plate


One of my favorite places in the world is the Udvar-Hazy Air and Space Museum (which is only about 15 minutes from my house), so when I saw that I could help support the Smithsonian with a custom license plate, I figured I’d give it a go. While I was at it, I decided to see if I could figure something out that would also symbolize one of my other passions: web development. It occurred to me that the perfect way to bring them both together would be the tag “&nbsp”, which is the HTML entity code for “space” (technically, it’s “&nbsp;”, but you can’t get a semicolon on a license plate, and most browsers don’t require it, anyway).

When I checked the plate online, I was both pleased and surprised to find that it was available, but after I started the registration and purchase process, I found out why. The DMV web application does not escape user input, so the character sequence “&nbsp” is always displayed as a literal space. I hoped I might still get away with it; however, when I tried to submit the order confirmation form, I got a server-side error message explaining that the plate ” ” (empty space) was invalid.

Being the determined hacker that I am, I initially saved the source from the confirmation page, fixed the error by turning “&nbsp” into “&amp;nbsp” (the character code for ampersand followed by “nbsp” — the proper way to escape user input in this case), and started working on tricking the DMV’s servers into believing that the form I was submitting actually came from them. But then it occurred to me that I could simply fix the DMV’s mistake using the WebKit Web Inspector. I opened up the awesome Chrome Dev Tools, made the change in the live page, and the form submitted perfectly. About two weeks later, my brand new plates arrived.
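Incidentally, escaping user input correctly is a one-liner in most languages. Here’s a minimal sketch using Python’s standard-library helper (the DMV’s stack is obviously something else entirely, so this is just for illustration):

```python
import html

plate = "&nbsp"            # the character sequence I typed into the form
print(html.escape(plate))  # -> "&amp;nbsp"

# Rendered in a browser, "&amp;nbsp" displays as the literal text "&nbsp",
# whereas the unescaped "&nbsp" gets interpreted as the entity for a
# non-breaking space -- which is why the DMV's site showed a blank.
```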

Thanks to the WebKit Web Inspector, the Chrome Dev Tools, and the openness and transparency of the web, I’m now rolling through Northern Virginia representing all my space-enthusiast and web-developer homies.

Macro Photographs of a MacBook Pro Retina Display

The other day, I noticed my Canon 7D with a 100mm macro lens on it sitting right beside my MacBook Pro with a Retina display, so I decided to see what 220 pixels per inch looks like blown up. The photographs below compare the same icons and text on a Retina display versus the display on an 11″ MacBook Air.

Click on any of the images to see it at twice the size (note that the images are 1,000 pixels wide and 220 PPI, so they look awesome on a retina display, but they may also take a few seconds to load).
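The 220 number isn’t magic, by the way; pixel density is just the diagonal resolution divided by the diagonal screen size. A quick sanity check in Python, using the published specs for the 15-inch Retina model and the 11″ Air:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# 15.4-inch MacBook Pro with Retina display (2880 x 1800)
print(ppi(2880, 1800, 15.4))  # ~220.5, which Apple rounds to 220

# 11.6-inch MacBook Air (1366 x 768)
print(ppi(1366, 768, 11.6))   # ~135
```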

[Image: Text on an 11″ MacBook Air.]

[Image: Text on a MacBook Pro Retina display. Much sharper.]

[Image: The Mail.app icon on a standard display.]

[Image: The Mail.app icon on a Retina display. If your monitor is clean and you look really closely, you can see a few dead pixels.]

[Image: A close-up of the Mail.app icon on a standard display.]

[Image: A close-up of the Mail.app icon on a Retina display.]

[Image: The menu bar on a Retina display. Notice how the updated icons look great, and those that haven’t been updated yet look like crap. Unfortunately, this is what most of the internet looks like (with the exception of text, which looks great).]

[Image: The dreaded ghosting issue. You can also see several pixels misbehaving in this photo (top center).]

[Image: The one curious exception to text looking almost universally better on the Retina display is the Twitter application. For some reason, the text looks as bad as the scaled-up profile pictures.]

[Images: mac_stack_1_large, mac_stack_2_large, mac_stack_3_large]

Genetic Data Storage Technology From Containment Becomes a Reality


In my novel Containment, I write about a computer scientist (Arik) and a biologist (Cadie) who work together on a project to use human DNA as a general data storage medium. They call the project ODSTAR, for Organic Data Storage and Retrieval, and the first big piece of data they store and successfully retrieve is an image of Earth known as The Blue Marble (one of the most famous photographs in history, taken by the crew of Apollo 17). Their ODSTAR technology eventually gets used to store critical research which, they discover, can actually get passed down to future generations.

As was the case with artificial photosynthesis and the proposal to use light pollution from distant worlds to detect the existence of extraterrestrials, technology proposed in Containment has again become a reality. Researchers at Harvard University encoded a 53,426-word book into DNA and then decoded it again with an error rate of only ten bits total.
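If you’re curious how data ends up in DNA at all, the core idea is surprisingly simple: treat the four bases as a way to write out bits. Here’s a toy sketch in Python, loosely in the spirit of the Harvard team’s one-bit-per-base scheme (the real system adds addressing blocks and error handling that this ignores, and the strict alternation between the two bases for each bit value is my own simplification to avoid long runs of a single base):

```python
def encode(data: bytes) -> str:
    """Map each bit to a base: 0 -> A or C, 1 -> G or T.

    Alternating the choice by position guarantees no two adjacent bases
    repeat, sidestepping homopolymers, which are hard to synthesize and
    sequence accurately.
    """
    bits = "".join(f"{byte:08b}" for byte in data)
    bases = []
    for i, bit in enumerate(bits):
        if bit == "0":
            bases.append("A" if i % 2 == 0 else "C")
        else:
            bases.append("G" if i % 2 == 0 else "T")
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the mapping: A/C -> 0, G/T -> 1, then repack into bytes."""
    bits = "".join("0" if base in "AC" else "1" for base in strand)
    return bytes(int(bits[i : i + 8], 2) for i in range(0, len(bits), 8))

message = b"The Blue Marble"
strand = encode(message)
assert decode(strand) == message
print(strand[:32])  # first 32 bases of the encoded message
```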

If you have a subscription to the journal Science, you can read the paper here. Otherwise, you can find more details on Mashable. And, of course, you can find Containment on Amazon.