The iPhone, fixed.

Merriam-Webster defines an appliance as:

  1. A piece of equipment for adapting a tool or machine to a special purpose.
  2. An instrument or device designed for a particular use or function.

Wiktionary’s definition is similar:

An implement, an instrument or apparatus designed (or at least used) as a means to a specific end (often specified).

In terms of hardware, the iPhone is nearly perfect. But iOS makes it feel more like an Apple appliance than a flexible and versatile computer.

So I decided to fix it.

Seeing (yourself in VR) is believing

I started exploring primitive augmented reality with Google Glass about four years ago. A year later, I began playing with virtual reality using Google Cardboard. And the following year, after watching the Oculus Interstellar Experience at the Air and Space Museum (twice — I went back the next day), I ordered the Oculus Developer Kit 2, built a VR-optimized computer, and subsequently spent hours exploring virtual worlds.

It all felt very experimental and forward-looking back then. By the time I finally got the Oculus Rift CV1 (Consumer Version), I had already decided that VR wasn’t quite ready, and that it would be another generation or two before it would really inspire both consumers and content creators. But then something happened that completely shifted my perspective: I picked up the Rift Touch Controllers.

Bringing my hands into VR with me took immersion to an entirely new level. The experience went from passive observation and consumption to active participation and creation. Suddenly, I was simultaneously controlling time while shattering enemies, climbing breathtaking mountains, and sculpting with light under the stars. Everything about VR I’d been complaining about was easily forgotten, and I was transported. I became convinced — not gradually, but almost instantly — that VR had the potential to become a computing platform every bit as transformative as mobile, if not more so.

As recently as a year ago, I would have said that tropes like Tony Stark manipulating holographic, virtual objects in 3D space, or the Avatar / Hunger Games / Westworld control rooms, were, at best, aspirational — vaguely feasible visions of a future too distant to foresee. Today, I know for a fact that they’re not only achievable, but achievable on a reasonable timescale.

VR still isn’t “ready” yet in the way that mobile was ready in 2007 when Apple introduced the iPhone. VR hardware is expensive, uncomfortable (though it’s gotten much better), and the experience is inherently solitary (at least in physical space). In general, getting in and out of VR is still too big of a commitment for most people to do on a regular basis. But as I wrote in a recent piece on Medium, if we wait for VR to be ready, we will have waited too long.

Prediction: Apple will make monitors again, and the base will be a wireless iPhone charger

thunderbolt_and_iphone

If you’ve ever spent time in front of an Apple Thunderbolt Display, then you probably know how tempting it is to rest your phone on the stand. And if you’re paying attention to iPhone rumors, then you know that the next iPhone will likely support wireless (inductive) charging. So why wouldn’t Apple capitalize on their customers’ habits by making a new 5K monitor with wireless charging built into the base?

I know the rumor is that Apple is out of the monitor business, but I don’t buy it. First of all, I can’t imagine Apple being content with allowing customers to spend all day every day staring at a logo that isn’t an Apple. And second, as evidenced by LG’s recent technical problems with their 5K monitors, I also can’t imagine Apple permanently handing over such an important part of the customer experience to third parties who might not execute as well as Apple.

I strongly believe that Apple’s partnership with LG is a stopgap measure — a way to cobble together a reasonable but temporary 5K solution to support the launch of the new MacBook Pro. But once the new iPhones are ready to be announced, I think we will discover that Apple has been working on both brand new 5K monitors, and brand new iMacs, in tandem.

Apple’s new 5K monitors won’t necessarily be about directly generating huge profits for the company. Rather, they will be about contributing to the Apple ecosystem — or to borrow a metaphor from Amazon, adding energy to the Apple flywheel. Just like iPhone customers can augment their overall Apple experience by wearing an Apple Watch, and just like Apple Watches can augment the Mac experience by unlocking computers, and just like Continuity augments the experience of using an iPhone and a Mac together, customers will be able to augment both their MacBook and their iPhone experiences by using a single USB Type-C cable to connect a beautiful, extremely bright, wide-color 5K monitor, and to wirelessly charge their new iPhones at the same time.

When trying to peer into Apple’s future from the outside, it’s important to note that they are in the business of inventing the future. That doesn’t mean inventing disparate devices; it means inventing constellations of devices, and connecting all those dots through seamless, wireless services. When Apple removed the headphone jack from the iPhone 7, they didn’t leave it up to third parties to provide bluetooth audio solutions. They did it themselves with both AirPods and with new Beats products to make sure it was done right (they even augmented bluetooth technology with their own W1 chip). And now that Apple has removed almost all the ports from their MacBooks, I can’t imagine they’re going to leave it entirely up to third parties to build out the USB Type-C ecosystem. When it comes to the experiences that really matter (like your Mac’s primary visual interface), Apple will take it upon themselves to do it right.

While I have your attention, I’ll make a few other bold, unsubstantiated predictions:

  • Other Apple peripherals (mouse, trackpad, keyboard, standalone Touch Bar) will eventually be wirelessly charged, as well.
  • You’ll be able to charge the Apple Watch (along with your peripherals) on your monitor stand.
  • And the big one: the next iPhone won’t be called the iPhone 8. I think Apple is going to reset the version number by calling the next new phone they release the Apple Phone.

Review of Google WiFi

google_wifi_on_desk

About 2,800 square feet over three floors. Usually around 25 devices connected at any given time. Only one wifi router, which can’t be moved due to where the cable enters the house.

This is clearly a job for Google Wifi.

google_wifi_devices

In addition to eradicating humanity, Skynet must keep dozens of my devices reliably connected to the internet.

I switched from an Apple AirPort Extreme to a Google OnHub router about a year and a half ago, and while I didn’t like it very much at first, a steady cadence of software upgrades improved it to the point where I was glad I switched. When Google Wifi came out late last year, and I found out that I could use mesh networking to extend the range of my OnHub, I pre-ordered a three-pack right away.

Mesh networking is relatively simple in theory, but in practice, robust mesh networks are complex. Mesh networks obviate the need to have multiple wifi routers with hardwired connections throughout your home or office. Instead, routers create individual, overlapping zones of coverage capable of relaying network traffic through their peers back to a single hardwired connection. Devices connect to whichever node has the best signal and/or the least amount of congestion, and then traffic is routed through the mesh, to your primary wifi point (the one with the actual connection to the internet), and then back again to the device — entirely seamlessly. Not only can mesh networks dramatically extend the range of your network without requiring you to run additional cables (all they need is power), but they are also extremely durable and robust since the network can route around any problematic nodes which may be temporarily unavailable due to software updates or malfunctions.
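To make the route-around behavior concrete, here’s a toy sketch in Python. The node names and topology are invented for illustration, and real products like Google Wifi weigh signal strength and congestion rather than just hop count — but the core idea of routing around a dead node can be modeled as a simple graph search:

```python
from collections import deque

# Hypothetical topology: node -> neighbors within radio range.
# Names are invented for illustration; only "gateway" has a wired uplink.
MESH = {
    "gateway": ["living_room", "basement"],
    "living_room": ["gateway", "upstairs"],
    "upstairs": ["living_room", "basement"],
    "basement": ["gateway", "upstairs"],
}

def route_to_gateway(start, mesh, offline=frozenset()):
    """Breadth-first search for a shortest-hop path from a node to the
    gateway, skipping nodes that are offline (rebooting, updating, etc.)."""
    if start in offline:
        return None
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == "gateway":
            return path
        for neighbor in mesh.get(node, []):
            if neighbor not in seen and neighbor not in offline:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # node is cut off from the gateway

# Normal operation: upstairs traffic hops through the living room.
print(route_to_gateway("upstairs", MESH))

# The living room node goes down for an update; traffic routes around it
# through the basement, with no action required from connected devices.
print(route_to_gateway("upstairs", MESH, {"living_room"}))
```

The second call is the whole value proposition: the path heals itself as long as any chain of powered nodes still reaches the wired connection.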

I’ve had Google Wifi set up for a couple of weeks now, and I’d say that I’m generally happy with the results — though with some qualifications. Here are my overall thoughts so far:

Setup was relatively easy, but surprisingly time-consuming. The first thing you do is install the Google Wifi app on your phone (I used a Google Pixel XL), and then you follow a set of simple instructions which generally involve connecting a Google Wifi device to power (reversible USB Type-C!), scanning a QR code on the bottom, and waiting. And then waiting some more. And then, while you’re at it, doing a little waiting.

google_wifi_animation

Setting up Google Wifi means plenty of waiting. But also plenty of cool animations to mesmerize you in the process.

If the process goes smoothly, your patience is rewarded with a fairly painless network upgrade. However, the process rarely went smoothly for me. There were software updates, unexplained errors, and worst of all, ambiguous results. (I was told setup didn’t complete properly, yet the device seemed to be functioning. What do I do? Leave it alone and hope for the best? Perform a factory reset and start again? No way to know.)

google_wifi_test_error

When setting up Google Wifi, be prepared for a few bumps along the path toward Wifi Utopia.

Not counting the time it took me to run out to Best Buy so I could replace a surge protector I discovered was blown, it took about an hour from the time I opened the three-pack of Google Wifi devices to the time I had every corner of my home awash in a beautiful overlapping patchwork of 2.4 and 5GHz spectrum. Not too bad.

google_wifi_test_results

A healthy mesh network, powered by Google Wifi. The primary node — in my case, an OnHub — is not pictured in this view.

If the story ended there, my review would have been as glowing as the Google Wifi’s Cylon-like LED diagnostic strip. But sadly, the tale continues.

We have very reliable power where I live, but during a recent and particularly energetic thunderstorm, it flickered a few times. Every device in my house is plugged into a high-quality surge protector and/or a UPS so nothing was damaged, but none of my Google Wifi devices came back up properly. Both my modem and my OnHub wifi router recovered just fine, but all my Google Wifi nodes were pulsating red.

To make a very long story short, using the Google Wifi app to restart them fixed two of the three, but the third — the one closest to the network drop — wouldn’t reconnect. And then, after a factory reset, I kept being told that it couldn’t connect to my network because it was out of range of my primary wifi point (it was not). Acting purely out of instinct, I factory reset all three devices, and re-added them again one-by-one (the one closest to the primary router first). After a great deal of waiting and a few more unexplained errors (mostly failed tests), all three devices were back online, and my mesh network was restored to its former glory.

google_wifi_points

Happy and healthy once again.

For a network that is supposed to be highly durable, I was pretty disappointed that I had to spend about an hour and a half trying to bring it back up after a fairly routine power flicker. And while I feel the quality of my network justifies the time I put into it, I can’t imagine how someone without early-adopter patience would have handled both the initial setup process, and then having to set everything up again a week later. (Actually, I can imagine it, and it looks a lot like several frustrating hours on the phone with support.)

In other words, Google Wifi currently meets my expectations and standards, but it does not pass the “parents” test. (If you buy this for your parents, or recommend they buy it for themselves, be prepared to provide plenty of tech support.)

If you have clear wifi-hypoxic zones in your home, and if you have the patience to deal with a system that clearly still has some bugs to work out, then I definitely recommend that you give Google Wifi a try. I consider $299 (for a three-pack) a reasonable price to pay for sophisticated networking equipment that solves a very real problem without having to run any additional cables throughout your home.

But unless you have a very clear need for something like Google Wifi, I would recommend waiting. Consumer-grade mesh networking is still relatively new, and while $299 isn’t bad ($129 for a single device), as with most new technology, the longer you wait, the cheaper it becomes — but more importantly, the less of your precious time it will demand.

Update (3/27/2017): My experience with customer support, and all connection issues finally resolved.

Continue reading

Nexus 6 Impressions

nexus_6

I’m intentionally avoiding the term “review” because there are already plenty of exhaustive analyses of the Nexus 6 out there (for my two favorites, see MKBHD and The Verge). Instead, I’m just going to cover a handful of elements — both good and bad — that really stood out for me as someone who has owned and actively used every single Nexus and iOS device to date.

Continue reading

Yes, the T-Mobile iPhone 6 from Apple Works on Verizon

iphone_6_on_verizon

I’ll get right to the point: the T-Mobile iPhone 6 (and 6 Plus) from Apple (not from a T-Mobile store) works fine on the Verizon network. Just eject the T-Mobile SIM that comes with it, insert your Verizon SIM, and boot. The T-Mobile iPhone from Apple appears to be entirely global, and fully carrier-unlocked, which makes it the best choice for those of us who like to buy phones outside of contracts.

I made this discovery after trying (and failing) for several weeks to buy a new iPhone 6 without a contract — something all parties involved make as difficult as possible. You can’t order a Verizon, AT&T, or Sprint version without a contract; you can’t reserve one online and go into a store to pick it up; Apple won’t sell you a SIM-less, carrier-unlocked iPhone for the first few months after launch; and stores like Best Buy charge you $100 over retail if you want to pay the unsubsidized price (at least the one near my house).

But I eventually gathered enough information (and certainly plenty of misinformation) from enough stores and online sources to feel fairly confident that the unsubsidized T-Mobile iPhone 6 was not only carrier unlocked, but that it would work perfectly fine on Verizon’s CDMA and LTE networks. So I decided to take a chance and order one online.

And it works. I now have an iPhone 6 on Verizon, and a 2014 Moto X on AT&T, which makes me far more connected than anyone possibly needs to be, but allows me to indulge my phone fetish to the greatest extent possible.

Yes, an intervention is probably not far off.

A Watch Enthusiast’s Review of the Samsung Gear Live with Android Wear

samsung_gear_live

Update (12/10/2014): I’ve tried several more Android Wear devices, and both the hardware and the software are getting better. Android Wear has fixed some of the issues I complain about below, and the LG G Watch R and Sony Smartwatch 3 are actually pretty decent devices. (Most people like the Moto 360, but I think a round display should really be round.) All smart watches are still a very long way from being actual watches (as opposed to devices strapped to your wrist), but I’m glad to see how quickly the industry is iterating.

Most of the reviews I’ve seen of the new Android Wear smartwatches have been from device early adopters as opposed to true watch enthusiasts, so I figured I’d provide the perspective of someone who is decidedly both. I’ve always been a gadget fanatic (I keep the latest iPhone and best Android devices on me at all times—as of today, that’s the 5s and the HTC One M8), and as the founder of Watch Report (which I started in 2005, and finally sold last year), I’ve owned and/or reviewed hundreds of watches from Casio to Rolex. Additionally, over the years, I’ve kept a very close eye on the category of smartwatches, from MSN Spot watches like the Tissot High-T (the best of its long-extinct class), to the Abacus Wrist PDA (never remotely practical, but undeniably fun), to more modern interpretations like the Pebble.

Continue reading

Review of Google Glass

wearing_google_glass

There are plenty of reviews and op-eds out there on Google Glass by now — even plenty from people who have never even worn them — so I’ll make this succinct, and try to cover observations that I haven’t seen elsewhere.

In general, I really like Google Glass. Once you get used to wearing it — once you’ve found a workflow that makes sense for you, and once you’ve come to rely on some of the functionality Google Glass provides — you don’t like being without it. You even become gradually more willing to endure the ogling, inquisitions, and embarrassment that inevitably come with wearing it outside the house.

Those who criticize the form factor, battery life, etc. need to keep in mind that the Explorer Edition of Google Glass is a prototype. It’s designed to introduce the concept of wearing a computer on your face, and to help developers, content creators, and consumers learn about wearable computing. Limitations aside (and there are plenty), I think the concept that Google Glass introduces is solid. I don’t know how successful the first consumer version of Glass will be, but I now believe that there are real benefits to accessing information almost effortlessly, and I think the world is probably very close to being ready for the next wave of personal and portable technology beyond the mobile phone.

Some specific thoughts and observations (in no particular order):

  • Navigation is probably my favorite feature. In fact, it might be what made me realize that wearable computing had to become a reality. Glass’s implementation isn’t perfect (it drains the battery quickly and there are still plenty of bugs), but being able to simply glance up and see turn-by-turn directions in the corner of your vision is extremely effective.
  • The camera button should deactivate when you remove Glass from your face. The camera button is on the top and the USB port is on the bottom which means when you turn Glass over to charge it or to connect it to your computer, you frequently take unintentional upside down pictures (which subsequently get uploaded to Google+). Because of where I usually charge Glass, many of these pictures end up being of my crotch. The photos aren’t actually posted publicly, but you still have to go into your photo stream and clean it up from time to time (and if someone is looking over your shoulder, you might have some uncomfortable explaining to do).
  • Kids absolutely love Glass. I frequently give new technology to my children — or do my best to get feedback from teenagers — since kids usually approach new concepts without the preconceptions of adults. While I had friends who were quick to laugh at and dismiss Google Glass, my experience is that kids think the concept is amazing, and aren’t remotely ashamed or embarrassed to wear them. My older daughter wore Glass for an entire afternoon at an art festival and got significant use out of them. When she wore them out to dinner the other night, our waitress confessed that she couldn’t wait to get a pair herself. If you think that Glass is too nerdy to be successful, consider early perceptions around things like personal computers in general, mobile phones, and Bluetooth headsets. Even wristwatches were once widely thought to be a threat to masculinity. It wasn’t until soldiers started strapping their pocket watches to their wrists because it was much more practical than having to remove them from a pocket that wristwatches started to gain acceptance beyond frivolous feminine accessories.
  • While kids love Glass, the thirty-degree activation feature does not work well for them. You can activate Glass by tilting your head up thirty degrees so that you don’t have to reach up and touch the touchpad. It turns out that kids are almost always looking up at roughly thirty degrees, which means Glass is constantly activating. I find that when I wear Glass, it activates more frequently than I would like, as well (despite the fact that I’m six feet tall). For instance, it activates during movies, and during the all-important motion of drinking a beer. Rather than lifting your head thirty degrees to activate Glass, I think a more useful gesture would be to look up with just your eyes — something we do less frequently and that is more deliberate than just lifting your head.
  • One of the most frequent criticisms I’ve read about Glass is that you still need your phone. (I’ve heard this about the latest generation of smart watches, as well.) I have no idea where this complaint comes from. I don’t think the goal of wearable computing — whether it be on your face or your wrist — should be to replace other technologies, just like I don’t think phones or tablets replace personal computers. Technology more frequently adds and augments rather than displaces or replaces. The goal of wearable computing is to create new opportunities and interactions rather than render older ones obsolete. Sometimes new technologies are so obviously superior that the old ones are rapidly abandoned, but it’s more common for us to add additional devices and capabilities to our lives — at least during an initial period of transition.
  • I’m probably stating the obvious here, but future versions of Glass will need to be water resistant. Once you get used to having Glass, you don’t want to take it off just because the forecast calls for showers.
  • Glass has a built-in guest mode which is a brilliant idea. Like your phone, Glass becomes a very personal device which you will likely be reluctant to share with everyone who asks to take a peek — especially because it’s so easy to post content publicly. Glass also takes some getting used to so you probably don’t want your friends and family familiarizing themselves with the interaction models while connected to your Google+, Facebook, and Twitter accounts.
  • The one big complaint I have about Glass is the viral messaging that gets appended to texts and emails. All messages have the text “Sent through Glass” appended to them, and as far as I can tell, there’s nothing you can do about it. If Glass were free, I wouldn’t complain. But at $1,500 (actually, well over $1,600 with tax), I don’t feel like I should be forced to do viral marketing on Google’s behalf. The ironic thing is that given the option, I would probably still append some kind of a message as I do on my other devices in order to provide some context for brevity, auto-correct errors, etc. However, I really hate not being given the option to at the very least customize the message.
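The thirty-degree problem described above is easy to model. Here’s a toy sketch — the 30-degree threshold comes from Glass’s documented behavior, but the function name and the sample pitch angles are invented for illustration:

```python
ACTIVATION_PITCH_DEG = 30.0  # Glass's head-tilt wake threshold

def should_activate(pitch_deg, threshold=ACTIVATION_PITCH_DEG):
    """Naive wake rule: the display turns on whenever head pitch
    (degrees above horizontal) meets or exceeds the threshold."""
    return pitch_deg >= threshold

# Hypothetical pitch samples over a short span of time.
adult_samples = [2, 5, 3, 31, 4]      # mostly level, one deliberate glance up
child_samples = [28, 33, 35, 30, 32]  # a kid routinely looking up at adults

adult_wakes = sum(should_activate(p) for p in adult_samples)
child_wakes = sum(should_activate(p) for p in child_samples)
```

The same threshold that reads as a deliberate gesture on a six-foot adult sits right in a child’s resting range, so the display wakes almost constantly — which is why a rarer, more deliberate signal like an upward eye glance would make a better trigger.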

Update (5/26/2013): After receiving a great deal of feedback and several questions, I decided to add a few additional points:

  • In my opinion, the camera is not the killer feature. Photos are difficult to compose with Glass, you can’t edit them like you can on a phone, and sharing is much more difficult and limited. (This final point could partially be addressed in software, but it still wouldn’t be as easy as a mobile phone.) The one advantage Glass has over a camera in a phone is that it is more available and accessible. However, that is also perhaps one of Glass’s biggest barriers to adoption. Having a camera constantly pointed at the world prompts all kinds of privacy concerns which bloggers, journalists, and even Congress will not tire of debating anytime soon. Like most things in life, technology is a tradeoff: You measure the good against the bad, and in the end, decide if something is a net gain or a net loss. At least right now, I believe that the camera in Glass is a net loss. The ability to take a picture a few seconds faster than you could with your phone is nice, but it’s probably not worth the accompanying privacy issues (it’s only a matter of time before Glass is banned from my gym, for instance), the additional bulk and weight of the device, and the fact that most pictures simply aren’t all that good. Having gained some experience with Glass now, if I had the option of buying a smaller, lighter, and less threatening version without a camera, I think I probably would.
  • Another issue that Glass has brought to the forefront is the matter of distraction. Distraction is no worse with Glass than with any other device. Glass never displays notifications unexpectedly (with the exception of the display activating when you tilt your head up, but that can be disabled), and therefore it is no more distracting than a mobile phone. In fact, because you can glance at a map or a message even faster, I would argue that Glass is possibly less distracting than a phone. That is not to say Glass is distraction-free, however. Not even close. I don’t believe that technologies like heads-up displays or hands-free calls eliminate or even reduce distraction, since the lack of attention is far more dangerous than the simple mechanics and logistics of interacting with a device. That said, when used responsibly, Glass should not be any more dangerous than your phone. (Note that I am not dismissing the dangers of phones — I’m just claiming that Glass is no worse.)
  • Related to distraction is the question of presence. I’ve noticed that a significant number of people are offended by Glass because they feel like it represents just one more device to come between people, to distract us from the current time and place, and to devalue human interaction. I’m undecided on this point. I personally choose to be discreet with my devices; unless I’m specifically monitoring something (usually work-related), I rarely pull out my phone in social situations, and I always give someone standing in front of me priority over whatever might be happening on my phone, watch, or Google Glass. That said, I don’t feel the need to impose my personal philosophies on others. I think it remains to be seen exactly what the repercussions are of integrating all these data streams into our consciousness. I know there are studies which are not optimistic, but devices like phones and Glass remind me a great deal of video games. When I was a kid, the conventional wisdom was that video games would “melt your brain” (and there are myriad studies that claim to back that up). I’m sure I’ve heard that phrase — or some variation thereof — dozens of times throughout my life. However, I’ve been gaming pretty consistently from the time I got my first hand-held LED football game, my first PC, and my first console (Intellivision), and I believe that video games have enriched my life in a number of ways. I believe that responsible and conscientious integration of technology into one’s life can be very positive and empowering. However, I also acknowledge that it can have detrimental effects on relationships and quality of life when not moderated. For the most part, I don’t find discussions of whether certain technologies are “good” or “bad” to be productive. I think it’s up to us individually to find where and how technology fits into our lives — to embrace it where it works, and reject or modify it where it does not. 
I see technology as a massive, never-ending human experiment, and we shouldn’t be afraid to try new things, and to make plenty of mistakes and adjustments along the way. And at least in instances where people’s lives are not at stake, I think we should be patient with those around us who are trying to figure it out for themselves.

Although I really love Google Glass, I don’t love it because it’s perfect, or because I think Google got it exactly right. I love Glass because it is an early attempt at practical wearable computing, and I think it proves that wearable computing is going to happen. The world was not ready for digital books on a large scale until Amazon introduced the Kindle, or tablet computers until Apple introduced the iPad, so it’s hard to say whether Google is ahead of its time, or whether the Glass team is successfully creating the environment they need to drive mainstream adoption. But whether it happens now or in the near future, there’s little doubt in my mind that it will eventually happen.

Whenever I question whether a new technology will be successful, I think back to a conversation I had with a friend of mine after the original iPhone was introduced. By any definition, he was and still is an Apple fanboy, but he had no interest in the iPhone because it wasn’t very well suited to single-handed use. He was too focused on the drawbacks, and not focused enough on how the positives would outweigh the negatives. Today, he’s a big iPhone fan, and I’m sure he couldn’t imagine his life — or probably even a single day, for that matter — without an iPhone in his pocket.

So to all the nonbelievers, get your jeering and finger pointing out of the way now because it may not be very long before you will be wearing something like Google Glass yourself.

The Ultimate Irony of Climate Change: Before We Created It, It Created Us

human_brain_size

The picture above was taken at one of the best exhibits I’ve ever seen in any museum: the Hall of Human Origins at the Smithsonian National Museum of Natural History. The chart in the lower right-hand corner shows the correlation between brain size in humans and drastic changes in climate with an emphasis on the period between 800,000 and 200,000 years ago. (A nicer version of this chart is available on the exhibit’s website.)

It makes perfect sense that greater intelligence (as evidenced by larger brains) proved advantageous during times of unpredictable weather, since the more humans were able to plan ahead, communicate, and work collaboratively, the more likely they were to survive. In fact, I’ve even read that the cranial capacity of fossilized skulls gets larger the farther from the equator they are found, suggesting a correlation between larger brains and harsher weather. In other words, in terms of natural selection, everything here appears to be in perfect working order.

But while there are no surprises in the relationship between brain size and climate change, there certainly is plenty of irony. The eventual results of all of that hard-fought intelligence were the agricultural and industrial revolutions — precisely the technological advances that are most closely associated with modern climate change. Therefore, one could theorize that surviving rapid climate change bestowed upon humanity just enough intelligence to create even more rapid and dangerous climate change. One might even go so far as to say that the human brain has found a way to perpetuate its own growth.

I’ve read conflicting predictions of how this latest wave of climate change will ultimately affect brain size. Since equatorial temperatures will continue to expand latitudinally, it’s possible that the human brain could suddenly stop growing; on the other hand, due to all the challenges humanity faces as a result of rapid climate change, the size of our brains could continue to grow — perhaps at an even faster pace. Personally, I’m hoping for a future where we learn to use technology, intelligence, and even a little empathy to finally take control of our own evolutionary paths. Although it’s a little late for me to be genetically engineered, I wouldn’t mind a few multi-core petaflop processors embedded in my brain and at least one robotic arm.

The Miniaturization of Warfare

wmd_world_map

Growing up in the 80s, we were taught to fear a nuclear attack by the Soviet Union. Today, I think it’s fair to say that most people believe cyberwarfare is probably a greater threat than a full-scale nuclear holocaust.

What many people don’t fully grasp about nuclear weapons (in particular, those who object to reducing our stockpiles) is that they constitute a tremendous expense without all that much benefit — primarily due to the fact that governments can’t actually use them. Whereas the U.S. currently deploys conventional weapons on a weekly and sometimes daily basis, it’s very difficult to imagine a scenario where the United States could justify launching a nuclear attack of even the smallest scale.

This concept is critical to the plot of my story The Epoch Index, and is probably best described by the following passage:

After centuries-old rivalries finally escalated into full-scale nuclear conflicts, the United Nations drafted and unanimously voted into effect a resolution unequivocally banning any sized nuclear arsenal anywhere on the planet. The U.S. and other early nuclear adopters were happy to back (and help enforce) the new international law, having long ago anticipated the nuclear backlash and invested heavily in Prompt Global Strike systems: networks of launch vehicles and hypersonic cruise missiles designed to deliver warheads filled with scored tungsten rods twice as strong as steel and capable of ripping any structure anywhere on Earth to shreds in less time than it takes to have a pizza delivered. Thermonuclear hydrogen bombs were old news, as far as most world powers were concerned. The only reason to unleash 50 megatons of destruction is if you have very little faith in the accuracy of your delivery mechanisms. Modern weaponry can target down to the square centimeter, and since it uses real time topographical guidance, it can do so even when your entire GPS satellite network is compromised. Besides, what’s the point of defeating another nation if your great grandchildren can’t even set foot in it, and just about everything worth looting, pillaging, or oppressing is either incinerated or radioactive? Nuclear weapons are clumsy and inelegant. High-tech conventional is the new thermonuclear. Modern militaries say less is more.

In my upcoming novel Kingmaker, drones are a central theme:

It wasn’t special operations teams that concerned him; he was confident he could see a takedown coming in plenty of time, and even if he didn’t, he probably stood as good a chance of walking away from a team of Navy Seals as any one of the Seals themselves. What Alexei feared was death from above. With a well coordinated drone strike, you were simply there one moment, and everywhere but there the next. It didn’t matter how quick you were, or how smart, or how well trained. If you were on the CIA’s radar, they knew how to get you off of it and still be home in time for dinner and to kiss the kids goodnight. All it cost them was barely an hour’s worth of classified paperwork that everyone already knew would never see the inside of either a civilian or military courtroom.

As a deterrent, maintaining a nuclear arsenal equal to (or slightly greater than) those of one’s rivals still makes some strategic sense; however, the reality is that weapons which can be relatively inexpensively and surreptitiously deployed are far more menacing than weapons that everyone knows you cannot actually use. In other words, the world has much more to fear from weapons that can — without due process — target buildings, vehicles, and even individuals than from indiscriminate warheads that can destroy entire cities.

Just as in the world of technology, we are now witnessing the miniaturization of warfare.