Do you use a smart phone to control things in your house? Thermostats, TVs, maybe the light switches? A single voice command can turn on the music, turn off the lights, and switch the HVAC system from heating to cooling. But these devices can also create some nasty problems.
These things are connected to the internet, or at least a lot of them are, and unfortunately some manufacturers of internet of things (IoT) devices haven't paid much attention to security. You've probably heard that your smart TV might be listening to you all the time. In theory, it doesn't send your words anywhere. In practice, that hasn't always been true.
Then there are Cortana, Amazon Alexa, Google Assistant, and Siri listening in. They all tend to be like a lazy high school student in the least appealing subject at the end of a warm day during the last week of school: the teacher says words, but they aren't processed unless the student is addressed by name.
I've enabled Cortana, but I call on her only when I want the temperature or some other innocuous piece of information. But how do I know Cortana is dozing unless I explicitly call on her? In fact, I don't, and if somebody managed to hijack the service, Cortana could be used to silently search the computer for useful information. And, of course, crooks know that they can take over IoT devices and use them to stage huge denial-of-service attacks like the one that happened late last year.
The thing to remember if you're using one or more of these devices is that they're still very new. Cool, yes, but keep an eye on security. As more people start using interconnected devices, crooks will become more interested. A minor security problem now could be devastating once criminals learn how to exploit it.
One of the most logical precautions is to keep IoT devices off your main home network. They may need access to the internet, but most of them don't need access to your computers. Fortunately, many routers already include what's called a guest network. Just make sure the guest network is enabled, give it a strong password (just like your main network), and connect the IoT devices to it.
Signals from both networks pass through the same router to the cable modem, but they're kept separate: devices on the guest network have no access to the computers on your main network.
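If you want to verify that the separation actually works, one rough test is to try reaching one of your main-network computers from a device on the guest network. Here's a minimal sketch in Python; the address and port below are placeholders you'd replace with the LAN IP of one of your own machines:

```python
import socket

# Placeholder address of a computer on your main network; substitute
# the real LAN IP of one of your own machines. Port 445 is Windows
# file sharing, a service that should never be visible to guests.
MAIN_NETWORK_PC = ("192.168.1.10", 445)

# Run this from a device connected to the guest network. If the guest
# network is properly isolated, the attempt should time out or be refused.
try:
    with socket.create_connection(MAIN_NETWORK_PC, timeout=5):
        print("Reachable -- the guest network is NOT isolated!")
except OSError:
    print("Unreachable -- isolation appears to be working.")
```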
When you're in the market for something, check out its security -- not just its functionality. Amazon Alexa powers the company's Echo speakers and other devices. Instead of waiting for you to download security patches, the speakers periodically check in with Amazon's servers, download any available security updates, and install them automatically.
Both Samsung and Vizio have been criticized for making televisions that sent customers' speech back for testing and analysis (Samsung) or logged customers' viewing habits and shared them with advertisers (Vizio).
So it pays to read the company's privacy policies, as boring as they are. What information does the device capture and what does it do with that data? It's one thing if the information stays on the device, something else entirely if it's being sent back to the manufacturer.
Circling back around to your Wi-Fi router: by default, it broadcasts its SSID, or service set identifier. You might think it's a good idea to make that something you'll easily recognize -- like your family name. Even though the signal's range is limited, that's not a good idea. Use a cryptic SSID, or turn off the broadcast entirely. The router doesn't need to announce its identity for you to connect to it: you know what the SSID is, and you know what the password is. So why broadcast it?
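Connecting to a non-broadcasting network simply means supplying the name yourself instead of picking it from a list. As one example, here's a minimal sketch for a Linux machine running NetworkManager; the SSID and password are placeholders for your own network's values:

```python
import subprocess

# Placeholder values -- substitute your own network's name and password.
SSID = "my-hidden-network"
PASSWORD = "a-long-passphrase"

# nmcli can join a network even when the router isn't broadcasting
# its SSID; "hidden yes" tells it not to wait for a beacon.
subprocess.run(
    ["nmcli", "device", "wifi", "connect", SSID,
     "password", PASSWORD, "hidden", "yes"],
    check=True,
)
```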
And speaking of passwords: longer is better.
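A password manager can generate a long one for you, or you can roll your own. Here's a minimal sketch using Python's standard secrets module; the 24-character length is arbitrary, so pick something long:

```python
import secrets
import string

# Build a 24-character password from letters, digits, and punctuation.
# The secrets module uses a cryptographically secure random source,
# unlike the general-purpose random module.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(24))
print(password)
```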
Amazon and Google appliances seek out their own security updates, but it's a good idea to check for yourself occasionally -- and it's essential for devices that don't update themselves.
Just as we're cautioned to always be aware of our surroundings when in an unfamiliar city, keep an eye on your surroundings in the internet of things. It can be a very strange and unfamiliar city.
Once upon a time, a very long time ago, video editing required a large room full of high-priced equipment. Then, in December 1991, Adobe released version 1.0 of Premiere to compete with Avid's Media Composer, which had arrived a couple of years earlier. Apple's Final Cut Pro and EditDV by Radius (both since discontinued) followed later in the decade. These applications changed everything.
Premiere was initially available only on Macintosh computers. Today, video editing applications run on Windows PCs as well, and some functions are available even on smart phones and tablets.
Both film and video can be edited using one of two methods: linear and non-linear. Linear editing is simple and inexpensive: segments are selected, arranged, and modified in a pre-determined sequence. Unlike film, videotape cannot easily be cut and spliced back together to modify scene order, so linear editing was common with videotape.
Non-linear editing allows random access to segments. The editor can work on any segment at any time and in any order. Original source files are not damaged by editing, so multiple variations of the source files can be tested without the need to store multiple copies of the full video. Flexibility and the ability to undo edits are key advantages.
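Under the hood, a non-linear edit is essentially an ordered list of references into untouched source files -- an edit decision list. Here's a toy sketch of the idea in Python; the file names and timecodes are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: str   # path to the original file -- never modified
    start: float  # in-point, in seconds
    end: float    # out-point, in seconds

# A cut is an ordered list of clip references, not copies of the video,
# so the same source file can appear in the timeline more than once.
timeline = [
    Clip("interview.mov", 12.0, 45.5),
    Clip("broll.mov", 0.0, 8.2),
    Clip("interview.mov", 60.0, 75.0),
]

# Reordering scenes is a cheap list operation; no video data is touched,
# so trying (and undoing) an alternate cut costs almost nothing.
alternate_cut = [timeline[1], timeline[0], timeline[2]]
```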
Version one of Adobe Premiere (it wasn't yet called "Pro") had limited capabilities. There were just three possible frame sizes: 160x120, 240x180, and 320x240. In those days, though, a standard TV signal had just 525 scan lines, and only about 483 of them were visible; the rest fell within the "vertical blanking interval" of the signal. So the resolution Premiere provided was approaching TV quality, and the prospect of editing video on a desktop computer was exciting.
Adobe provided a video that's less than a minute long but manages to show the visual history of Premiere's first 25 years. Take a look:
Video provided by Adobe.
Premiere and then Premiere Pro adapted over time as features were added to meet the needs of users. In celebration of Premiere's 25th anniversary, Adobe has launched "Make the Cut" -- a global editing competition in partnership with the music group Imagine Dragons. The competition gives fans access to a wide range of uncut footage from the official music video of "Believer" by Imagine Dragons. Using these clips and Premiere Pro, amateurs and professionals can cut their own version of the video, and the winner receives a $25,000 prize.
Adobe will also award bonus prizes of $1,000 each, plus a year-long Creative Cloud subscription, in four special categories.
Projects will be judged by members of the band, Matt Eastin (director and editor of the "Believer" music video), two-time Academy Award winner Angus Wall (who edited films like The Social Network and The Girl with the Dragon Tattoo), music video editor Vinnie Hobbs (who has worked with artists like Kendrick Lamar and Britney Spears), and other notable entertainment industry professionals.
For more information and to download the raw videos, visit Adobe's website. If you don't already have Premiere Pro, you can also download a 30-day trial version from there.
This week the Guardian newspaper published an article by the guy who invented the World Wide Web. Tim Berners-Lee made a proposal for an information management system in March 1989, and he implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the internet in November of that year.
So, as Berners-Lee wrote, "Today marks 28 years since I submitted my original proposal for the world wide web. I imagined the web as an open platform that would allow everyone, everywhere to share information, access opportunities, and collaborate across geographic and cultural boundaries. In many ways, the web has lived up to this vision, though it has been a recurring battle to keep it open. But over the past 12 months, I've become increasingly worried about three new trends, which I believe we must tackle in order for the web to fulfill its true potential as a tool that serves all of humanity."
Image of Tim Berners-Lee provided by Wikipedia.org.
The full article is well worth reading on the Guardian's website, but here's a brief summary: Berners-Lee identifies three trends that worry him. First, we've lost control of our personal data. Second, misinformation spreads too easily on the web. Third, online political advertising operates with too little transparency.
This makes sense to me. How about you?
If the headline "Smart Phones Will Read and Write Better Than 32 Million American Adults in Next Decade" doesn't frighten (or at least concern) you, it should. Software like Siri, Alexa, and Cortana is getting better while American literacy rates remain stagnant.
Have you seen the movie "Idiocracy"? A forecast by the University of Massachusetts at Amherst and Project Literacy says that progress in improving human literacy rates has stalled since 2000, leaving 758 million adults worldwide -- 32 million of them Americans -- functionally illiterate. The report predicts that technological advances will soon enable more than 2 billion smart phones to read and write. At the current rate of technological progress, devices and machines powered by artificial intelligence and voice recognition software will surpass the literacy level of one in seven American adults within the next ten years.
The report "2027: Human vs. Machine Literacy" by the global campaign Project Literacy and University of Massachusetts professor Brendan O'Connor calls for society to commit to upgrading its people at the same rate as upgrading its technology, so that by 2030 no child is born at risk of poor literacy.
Consider that machine literacy already exceeds the literacy abilities of 3% of the US population. Today there are more software developers in the US than school teachers, and while we're focusing on making machines smarter, the report says we are "forgetting that 50% of adults cannot read a book written at an eighth grade level and that 32 million American adults cannot currently read a road sign." Yet 10 million self-driving cars are predicted to be on the road by 2020.
This phone can read and write better than several million humans.
A little background might be useful here. Pearson, the British publisher of textbooks and computer-based training materials, is the company behind Project Literacy. Pearson says its mission "is to help people make progress through access to better learning. We believe that learning opens up opportunities, creating fulfilling careers and better lives."
But the campaign is also backed by more than 90 partners as diverse as UNESCO, Microsoft, Worldreader, the Clinton Foundation, Room to Read, Doctors of the World, the Hunger Project, and ProLiteracy. The presence of the Clinton Foundation will probably make some people think this is a political issue. Regardless of who's involved -- right or left -- literacy seems to be a laudable goal, and it's illogical for this to be considered political; but we live in divided times.
University of Massachusetts professor Brendan O'Connor says that machines may be able to read, but that they are not yet able to master "the full nuances of human language and intelligence, despite this idea capturing the imagination of popular culture in movies such as 'Her'." But he says that advances in technology mean that it is likely machines will achieve literacy abilities exceeding those of one in seven Americans within the next decade.
Project Literacy's Kate James says that the report highlights the gulf between technological and human progression. "It is predicted that more than two billion smart phones will soon be capable of reading and writing," she says, "but 758 million people in the world still lack basic literacy skills and this skills gap is being passed on from generation to generation."
Project Literacy commissioned the report to draw attention to what can only be seen as the shocking lack of progress being made in fighting illiteracy.
The report was released in conjunction with the annual SXSW conference in Austin. You can download the full -- and distressing -- report from the Project Literacy website.