On Tuesday, Google made client-side encryption available to a limited set of Gmail and Calendar users in a move designed to give them more control over who sees sensitive communications and schedules.
Client-side encryption is a generic term for any sort of encryption that’s applied to data before it’s sent from a user device to a server. With server-side encryption, by contrast, the client device sends the data to a central server, which then uses keys in its possession to encrypt it while it’s stored. This is what Google does today. (To be clear, the data is sent encrypted through HTTPS, but it's decrypted as soon as Google receives it.)
Google’s client-side encryption works the way the name suggests: Data is encrypted on the client device before being sent (over HTTPS) to Google, and it can be decrypted only on an endpoint machine that holds the same key used by the sender. This provides an incremental benefit over server-side encryption, since the data remains unreadable to malicious Google insiders or to hackers who manage to compromise Google’s servers.
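To make the distinction concrete, here is a minimal sketch of the client-side approach in Python, using the cryptography package: the data is encrypted with a key the client controls before anything leaves the device, so the server only ever stores ciphertext. The key handling is deliberately simplified and is an illustration, not a description of Google's implementation.

```python
# Minimal sketch of client-side encryption: data is encrypted on the device,
# so the server only ever stores ciphertext. Illustration only; this is not
# a description of Google's actual key-management design.
from cryptography.fernet import Fernet

def upload(blob: bytes) -> None:
    """Stand-in for sending the ciphertext to the server over HTTPS."""
    print(f"server stored {len(blob)} opaque bytes")

key = Fernet.generate_key()            # in practice, held by the customer's key service
cipher = Fernet(key)

plaintext = b"Quarterly board minutes - confidential"
ciphertext = cipher.encrypt(plaintext) # happens on the client, before upload
upload(ciphertext)

# Only an endpoint that holds the same key can read the data back.
assert cipher.decrypt(ciphertext) == plaintext
```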
A major step in restructuring IEEE’s regions was taken during the November IEEE Board of Directors meeting, in which the Board approved the proposed region realignment as outlined in The Institute’s September article “IEEE Is Working to Reconfigure Its Geographic Regions”.
IEEE has been working to restructure its 10 geographic regions to provide a more equitable representation across its global membership.
Some of the factors that were considered when evaluating the realignment were membership counts, geographic locations, time zones, and providing the best overall experience for members.
During the IEEE Board meeting, the vice president of Member and Geographic Activities, David Koehler, presented MGA’s progress and the proposed region realignment.
The plan’s approval is the first of many steps. It allows for the consolidation of the six U.S.-based regions into five, joining current IEEE Region 1 (Northeastern U.S.) with Region 2 (Eastern U.S.), and for the splitting of current IEEE Region 10 (Asia and Pacific) into two regions.
Those regional changes will be effective in January 2028.
In addition, the Board formally approved the concept of zones and zone representatives. A zone is a substructure within a region with a significant number of members. Four zones were approved in November: two in Region 8 (Africa, Europe, Middle East) and two in Region 10, which became effective in January.
In those larger regions, zone representatives can assist with regional activities, give members an additional voice, and represent them on the MGA Board.
MGA is continuing its work on the planned region realignment to ensure all the appropriate steps are taken to make the transition.
This article appears in the March 2023 print issue.
Microsoft is adding support for Bing Chat and the other "new Bing" features to the Windows taskbar as part of 2023's first major Windows 11 feature update. Microsoft Chief Product Officer Panos Panay announced the updates in a blog post released today.
The Windows update doesn't open the new Bing preview to anyone who hasn't already signed up for it, and there's currently a waitlist for new users who want to try the feature. But if and when Microsoft expands the Bing preview to more of its users, millions of PCs that automatically install today's update will already have built-in support for it.
You can read about the other changes in the new Windows 11 update here. Anyone running the Windows 11 2022 Update can download today's update manually via Windows Update, and all of the new changes will roll out to those PCs automatically in March.
With Starlink speeds slowing due to a growing capacity crunch, SpaceX said a launch happening as soon as today will deploy the first "V2 Mini" satellites that provide four times more per-satellite capacity than earlier versions.
Starlink's second-generation satellites include the V2 Minis and the larger V2. The larger V2s are designed for the SpaceX Starship, which isn't quite ready to launch yet, but the V2 Minis are slimmed-down versions that can be deployed from the Falcon 9 rocket.
"The V2 Minis are smaller than the V2 satellites (hence the name) but don't let the name fool you," SpaceX said in a statement provided to Ars yesterday. "The V2 Minis include more advanced phased array antennas and the use of E-band for backhaul, which will enable Starlink to provide ~4x more capacity per satellite than earlier iterations."
Aalyria, a recent spinout from Google, is trying to build on other companies’ near successes. The company is revamping the software platform from the ambitious-yet-failed startup Loon to optimize and reconfigure networks in real time, using whatever connections are available. Aalyria is combining that tech with free-space lasers originally developed at Lawrence Livermore National Laboratory to connect the hard-to-connect. CEO Chris Taylor answered five rapid-fire questions on the company’s approach and the importance of solid engineering know-how.
Connecting remote areas has been a problem for decades—why is it so difficult to solve?
Chris Taylor: The hardest problem is how to deliver the service that solves the challenges of the digital divide, at a price point that all the people who are suffering from the digital divide can pay. This has been the age-old challenge.
Broadly, what are the two technologies Aalyria has been developing?
Taylor: Spacetime is a temporospatial, software-defined network that mashes together elements of traditional SDNs, traditional software-defined wide area networks, and then adds a digital twin of all wireless transceivers on the planet. Tightbeam is a coherent-light, free-space optics product that allows us to transfer data at high speeds, right up to the absorption lines of various atmospheric anomalies (fog, rain, snow, moisture).
How are you currently improving or refining these two key technologies?
Taylor: For Spacetime, it’s always about resiliency and how to ensure those data payloads are getting where they need to be as quickly as they can be there, and in the same form that they left. For Tightbeam, it’s always about distance and speed and atmosphere. That’s how we think about these things. Can I increase the distance? Can I increase the speed and capacity? And can I deal with the atmosphere better than anyone else to ensure that we can deliver as we said we will?
How is what Aalyria is doing different from other kinds of wireless or cell networks?
Taylor: You can autonomously find the most efficient path based on the requirements that a user has entered into the system. So we say, we need this stuff to go from A to B, to F, to Q, to R, S, T, and then to Z. And that’s the path it’s going to take. What we’ve done is said, A through Z are wired paths, if you will. But we’re now going to add all the wireless transceivers on the planet. And we’ll add all of the optical links that we can create with Tightbeam. Basically, Spacetime is an operating system that can do everything that you want it to do for all of your network, and Tightbeam is a killer peripheral. They don’t have to be used together, but they can always be used together when necessary.
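Spacetime's actual optimization is far more sophisticated and time-varying, but a minimal way to picture "find the most efficient path over whatever links are available" is a shortest-path search over a graph of links. The nodes, latencies, and code below are invented for illustration; they are not Aalyria's data or API.

```python
# Toy link-selection example: pick the lowest-latency route across a mixed graph
# of wired, wireless, and optical links. Names and costs are made up.
import heapq

links = {  # node -> list of (neighbor, latency in ms)
    "A": [("B", 5), ("S", 40)],
    "B": [("F", 7), ("S", 30)],
    "F": [("Q", 9)],
    "Q": [("Z", 12)],
    "S": [("Z", 25)],   # e.g., a free-space optical hop
    "Z": [],
}

def best_path(start: str, goal: str) -> tuple[float, list[str]]:
    """Dijkstra's algorithm: returns (total latency, node sequence)."""
    queue = [(0.0, start, [start])]
    seen: set[str] = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, latency in links[node]:
            if nxt not in seen:
                heapq.heappush(queue, (cost + latency, nxt, path + [nxt]))
    raise ValueError("no path found")

print(best_path("A", "Z"))   # (33.0, ['A', 'B', 'F', 'Q', 'Z'])
```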
How crucial is the expertise of the former Google engineers on staff for a project like Aalyria?
Taylor: When I was growing up as a kid, we didn’t yet have computers until I think I was a senior in high school, something like that. I just happened to be born at a time when the wild technological shift happened in America, and certainly in the world. Access to those technologies changed how I learned and how I saw many things in the world. It’s been wildly impactful to me. If I can work with a bunch of Googlers and Metas and Amazonians, who grew up in this age, who are experts in this technology, and can help us deliver the same experience that I’ve had growing up—without and then with—I think that shifting from without to with is wildly impactful for any human being. It would be great if the unconnected could experience that too.
This article appears in the March 2023 print issue as “5 Questions for Chris Taylor.”
In 1999 Bill Gates penned a moving tribute to the Wright brothers. He credited their winged invention as “the World Wide Web of that era,” one that shifted the world into a global perspective. So it’s only fitting that Microsoft later became the force behind Flight Simulator.
And, like the Wrights’ original Flyer, the game’s legacy has extended beyond flight to embody the shift of perspective that flight allows. Flight Simulator promised to fit the whole world into your computer, and the game kept its promise. That’s why it has become the world’s best-selling flight-simulation franchise: The latest edition has sold more than 2 million copies.
Although 2022 marked the 40th anniversary of Microsoft Flight Simulator, its lineage stretches a few years further back than its official release, in 1982. That makes it the second-oldest video-game franchise still in active development. (The Oregon Trail came out in 1971 and is still with us.)
The heart of the franchise isn’t in gamification but in the technical spectacle it uses to simulate flight and the ground beneath you. The focus on true-to-life depiction reflects the background of the game’s developers.
Bruce Artwick studied electrical engineering at the University of Illinois at Urbana-Champaign, yet he found time to pursue a dream many teenagers think about but few fulfill: He learned to fly. It was at the university’s flight-instruction program that he met Stu Moment, who would later become his business partner.
Artwick then took a job with Hughes Aviation, in California, while continuing to work on 3D graphics in his free time. In 1977, he wrote an article for Kilobaud: The Small Computer Magazine describing the “Sublogic Three-Dimensional Micrographics Package” he had created, which brought 3D to microcomputers outfitted with the popular Motorola 6800 microprocessor. Because so many readers were keenly interested, Artwick, looking for help turning the software into a business, reconnected with Stu Moment, and together they founded subLogic.
Dave Denhart, subLogic’s second hire, recalls that the company’s early days were driven by Artwick’s 3D software, updated for emerging microcomputers. “The stuff that [Stu] and Bruce were selling was basically a 3D software package [for microcomputers],” says Denhart. “The [Tandy] TRS-80 was one of them, and I think the Apple II was out by then.”
Artwick often claimed in presentations that subLogic’s software could be used for flight simulation— a suggestion that brought gasps from the audience. A computer displaying the perspective of a pilot soaring above the planet? Most people had never seen anything like it.
These screenshots represent three succeeding generations of Microsoft Flight Simulator, beginning with subLogic’s first simulator for the Apple II [top row, left], followed by iterations that ran on Atari [striped balloon] and MS-DOS. Josef Havlik and Microsoft
Encouraged, Artwick decided to make it a reality. Flight Simulator launched in late 1979 on the Apple II and the TRS-80 with wireframe graphics and a frame rate in the single digits. It didn’t depict real airspace or modern airplanes. Instead, players flew a World War I-era biplane based on the famous Sopwith Camel. Still, its first-person 3D visuals were ahead of the curve, predating more famous hits like Atari’s Battlezone.
“I know Bruce always saw, from his early days, a potential market for [Flight Simulator],” says Denhart. “It was when microprocessors became available that I think the lightbulb went off for Bruce that said, Hey, if I put this idea I’ve got for a flight simulator onto cheaper computers, and get that to work, there’s a market for that.”
Artwick was right. In September of 1982, Computer Gaming World magazine ranked Flight Simulator as the fourth best-selling title to date. IBM, craving a showcase for its IBM PC platform, contacted subLogic about bringing Flight Simulator to the new hardware. Microsoft, deep in development of IBM PC DOS, soon called with a similar request—and better terms. It got a version with its own name on the label, though Artwick continued to own his company for years to come.
Microsoft Flight Simulator, released in late 1982, continued to improve in the months that followed, mirroring the advancements in microcomputers. The graphics moved from monochrome to color (on PCs with the right hardware), and the display refresh rate increased to 15 frames per second, which one reviewer described as “very smooth.” Players piloted a Cessna 182 in four real-world areas, including Chicago and Seattle. For the first time, a home-computer enthusiast could fly a real-looking model of an airplane across true-to-life terrain, taking off and landing at facsimiles of real airports.
The realism extended to the flight model, which made use of an effective technique: lookup tables. That’s because real-time calculations of forces on an aircraft were beyond the capabilities of early IBM PCs. Fortunately, the aircraft manufacturers had already calculated how their products would perform. This gave subLogic a cheat sheet to build on.
These screenshots, beginning with Flight Simulator 1995 for Windows [top row, left] and ending with the 2022 iteration of Flight Simulator [bottom row, right], show increasingly realistic views from the cockpit. Josef Havlik and Microsoft
“You basically say, Here’s my input, what’s my output?” says Denhart. “[The simulator] can just simply do a lookup. Now, because of resolution limitations and memory concerns, you wouldn’t have a superlarge lookup table because the processors and memory couldn’t handle that. But you’d have a sort of medium size, even small size, and between the data points you can do interpolation.”
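As a rough illustration of that technique, the sketch below builds a tiny lookup table of lift coefficient versus angle of attack and linearly interpolates between entries, which is far cheaper than computing the aerodynamics from first principles. The numbers are invented for illustration, not subLogic’s actual data.

```python
# Toy flight-model lookup: a small table of lift coefficient vs. angle of attack,
# with linear interpolation between entries. Values are illustrative only.
import numpy as np

angle_of_attack_deg = np.array([-5.0, 0.0, 5.0, 10.0, 15.0])   # table inputs
lift_coefficient    = np.array([-0.2, 0.3, 0.8,  1.2,  1.4])   # precomputed outputs

def lift_at(angle_deg: float) -> float:
    """Look up (and interpolate) instead of solving the aerodynamics in real time."""
    return float(np.interp(angle_deg, angle_of_attack_deg, lift_coefficient))

print(lift_at(7.5))   # 1.0 -- halfway between the 5- and 10-degree entries
```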
With the flight model in place, subLogic expanded the simulation’s scope. Microsoft Flight Simulator 2.0 (1984) modeled the entire United States, Microsoft Flight Simulator 3.0 (1988) brought the Gates Learjet 25 and substantially more airports, and Microsoft Flight Simulator 4.0 (1989) added random events, including weather.
In 1993, Microsoft Flight Simulator 5.0 brought a new killer feature: textures. Previously, land between airports had been represented by patches of color—green for forests, gray for urban, blue for water. Textures offered new details. Beginning with Flight Simulator 5.1, those details were based on satellite imagery. For many, Flight Simulator would be the first opportunity to see satellite imagery in real-time 3D software.
Flight Simulator’s first decade on the market was a time of smooth ascent. Yet there was turbulence behind the scenes, reflecting bad air that had come even earlier. Artwick and Moment often disagreed, and by around 1980 they worked separately, Moment by day, Artwick by night. The split eventually became permanent, with Artwick leaving to form the Bruce Artwick Organization (BAO) in the late 1980s. 1988’s Flight Simulator 3.0 was the last version credited to subLogic.
Perhaps this schism contributed to Artwick’s decision to sell to Microsoft in 1995. Artwick could not be reached for this article, but sources employed before and after the sale remembered it as an abrupt yet unsurprising decision. Artwick had spent nearly two decades dedicated to the business. The team was also having differences with Microsoft.
“I think ultimately Microsoft wanted full control of the product versus Bruce holding on to it,” says Paul Donlan, who became group manager at BAO the year prior to Microsoft’s acquisition. “We were a small shop, and we played by small-shop rules, and that sometimes gave the Microsoft people difficulty. It was very easy for us to say no, which frustrated them tremendously.”
Microsoft takes control
Microsoft’s purchase of Flight Simulator brought an alluring visual showcase in-house at the right time. The flashy launch of Windows 95, hosted by Jay Leno, leaned on media features to convince consumers it was time to retire MS-DOS computers and buy a Windows replacement. (This push, incidentally, is what put the game on my radar: Microsoft Flight Simulator for Windows 95 came bundled with my family’s first Windows 95 computer.)
Moving Flight Simulator to Windows was no small feat. The game was deeply rooted in MS-DOS and the increasingly arcane software development practices of the early 1980s. Windows 95 could in theory run MS-DOS applications, but this wasn’t a good fit for Flight Simulator.
SubLogic’s Denhart explains that up to this point, Flight Simulator didn’t really use Microsoft’s operating system: “You’d stick the floppy into the floppy drive, it’d boot up, and I think it ran a minimal MS-DOS, but just enough to get started. And then it basically ignored MS-DOS.” The team had also resisted early versions of Windows over concerns it would slow the simulator to a crawl.
But now that Microsoft was in charge, failure wasn’t an option.
“When we went to Flight Sim 95 there was this huge port,” recalls Donlan, who credits Mike Schroeter, now a software engineer for Lockheed’s Prepar3D simulation platform, with taking the role of point man. “I can’t speak as to how significant it was across everything, but a lot of that code was being moved out of Assembly [language] and into C. There was a tremendous workload that was involved with that.”
It was a first taste of Microsoft’s culture of relentless toil. It was also only partially successful, as reviewers found performance issues on even the quickest home PCs. The team’s concerns about Windows’ ability to handle the simulation, it turned out, weren’t unfounded. Still, Microsoft Flight Simulator for Windows 95 was the eye candy Microsoft needed to highlight Windows 95’s media prowess.
The team doubled down on visuals for Microsoft Flight Simulator 98, which again pushed the bleeding edge by adopting 3D hardware acceleration. Test lead Hal Bryan says the effort demanded long hours for testing various 3D accelerators, which had yet to settle on common standards. The tests paid off, however, and Flight Simulator 98 quelled reviewers’ complaints about pokey performance.
A Cessna 182, shown here in a 1967 photo, was the aircraft chosen for the 1982 version of Microsoft Flight Simulator. Bettmann/Getty Images
Users also benefited from the rise of CD-ROM and DVD-ROM media, which provided space for more detailed textures, more terrain data, and quicker data-transfer speeds. Jason Dent, first hired for Microsoft’s Encarta World Atlas, soon moved to assist with Flight Simulator. Satellite imagery had improved the simulator’s visuals, but its data was still coarse—“between 4 and 16 kilometers on a side,” says Dent. Entire mountains were missing from less-traveled regions. To avoid such gaps, Dent and his colleagues combined satellite images with land-use data to deliver scale and precision simultaneously.
The hard work came to fruition in Flight Simulator 2000, which reached a technical milestone: It mapped the entire planet in 1-kilometer blocks. Scot Bayless, the studio manager overseeing the team, says an early demo left Bill Gates stunned.
Bayless recalls that after explaining to Gates that the software included every airport on the planet, Gates responded by saying, “‘You’re full of shit. That’s the stupidest fucking thing I’ve ever heard.’” This was Gates’s highest form of praise, Bayless notes. “In the lore of Microsoft, if Bill says that to you, you’re made.”
And, for a time, Flight Simulator did have it made. New versions landed on best-seller charts. The team, now renamed Aces Game Studio, created or contracted spin-offs like Microsoft Combat Flight Simulator and Microsoft Space Simulator. There was even talk of a universal platform for general-purpose, world-scale simulation, which eventually spun into Microsoft’s Enterprise Simulation Platform. ESP lasted only a few years but was licensed by Lockheed Martin for its Prepar3D simulation platform. In retrospect, ESP feels like a predecessor to modern efforts to build “digital twins” to simulate and replicate real-world environments.
Yet Flight Simulator had a problem, and it was coming from inside the company: the Xbox game console. Launched in 2001, Xbox was built to oppose Sony’s PlayStation 2, released in 2000, which had a DVD drive and could (with an accessory) connect to the Internet. Microsoft worried that some consumers might view the PlayStation 2 as a low-cost PC alternative.
Aces Game Studio explored bringing Flight Simulator to Xbox, says Bryan. But these efforts were frustrated by the challenge of adapting keyboard and mouse controls to the game pad. Bayless believes this created a rift between the Aces studio and Microsoft, and he regrets not pushing harder for an Xbox version. “I think we would have ended up with a stronger, more flexible, more robust, more future-proof engine.”
Aces, flying solo in an Xbox-centric Microsoft Games Division, became an easy target when the financial crisis of 2008 forced company-wide layoffs. For those affected, it was a nasty surprise, but the years have allowed some of them to accept that Microsoft’s decisions made sense, because the simulator’s last iterations had arguably been stagnant, focusing on past strengths and ignoring new platforms.
The Sopwith Camel, the famous British biplane from World War I, was featured in the first Flight Simulator, released in 1979. Imperial War Museums/Getty Images
The phoenix rises from the ashes
Then came Jorg Neumann, a Microsoft veteran working on a HoloLens project called HoloTour—an immersive virtual-reality tourist guide. It included a bird’s-eye perspective of locales like Machu Picchu, in Peru. The project faced challenges, however, especially at Machu Picchu, where the team had less data than it would have liked.
“It was pretty clear that, even with on-the-ground photographs, it was superhard to do a full, nice 3D model,” says Neumann. “At which point we just said, Why don’t we go and just have a plane fly overhead and give us the lidar data and appropriate photogrammetry?” (Lidar is a laser-based technique for measuring the distance to an object, while photogrammetry extracts 3D information from photographs.)
The flyover never happened, but Neumann’s perspective changed. “The idea persisted in my head. There is something there. We should try to get our game worlds augmented via aerial data.” He realized Microsoft already had the perfect application: Flight Simulator.
Neumann, using data from Bing, threw together a demo of a Cessna flying over Seattle—the same plane and city available in the original Microsoft Flight Simulator. It looked spectacular, even at that early stage. The project progressed, and Denhart’s archive proved invaluable.
“The code base and the project were really well archived,” says Neumann. The code was sent to Asobo Studio, the lead developer on the HoloTour project, and used to preserve compatibility with third-party planes designed for Flight Simulator X, the last iteration released by Aces Game Studio. The new Flight Simulator also retains a “legacy” mode that activates the old flight model, preserving a lineage tracing all the way back to 1982.
Most people flying today’s Flight Simulator will enjoy the default “modern” simulation, which models up to 1,500 flight surfaces. Airflow over each point in the simulation is determined by not only the plane’s speed and design but also environmental effects such as weather and nearby terrain. This level of simulation was unimaginable in 1982, but today it can run on any recent midrange AMD or Intel processor.
Flight Simulator promised to fit the entire world into your computer, and the game kept its promise.
Hal Bryan notes that the prior simulation fell apart in extreme situations, such as a stall and spin, so that the plane would behave in a wooden and overly predictable fashion. He knows, because that’s how he used to test the thing. The new simulation can precisely model airflow over many surfaces and can thus organically determine when a stall would begin and whether it becomes a spin.
While Bing’s data was useful in creating Flight Simulator’s world, the team still faced limitations. Quality photogrammetry data isn’t available for every inch of the ground. To fill in the gaps, Asobo used Blackshark.ai’s machine learning to convert photogrammetry data and satellite photos into a reproduction of the surface of our planet. The Blackshark.ai technology automatically creates buildings and adds them where appropriate, based on satellite photos. Machine learning also corrects color variations between photos while removing clouds and shadows.
“We wanted to have unique buildings, and basically you do this by procedural generation, which takes input from building footprints, the roof type, roof color, zoning, building density, and other information,” says Arno Hollosi, chief technology officer of Blackshark.ai. This data is then modified by “archetypes” that have styles appropriate for the geographic region. The result is a diverse range of 3D buildings that look realistic, at least from a thousand or so meters above the ground. This technique can also depict small communities and even lone rural houses and buildings, something artists could never hope to accomplish while adding buildings one by one.
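A heavily simplified sketch of that kind of pipeline appears below: a building footprint plus a regional “archetype” is turned into a basic box-and-roof description. The field names and archetype rules are hypothetical, not Blackshark.ai’s actual data model.

```python
# Toy procedural-generation step: footprint + regional archetype -> simple building.
# Field names and rules are hypothetical, for illustration only.
from dataclasses import dataclass
import random

@dataclass
class Footprint:
    width_m: float
    depth_m: float
    zoning: str          # e.g., "residential" or "commercial"

ARCHETYPES = {           # region -> zoning -> (min floors, max floors, roof type)
    "central_europe": {"residential": (2, 4, "gabled"), "commercial": (4, 8, "flat")},
    "us_suburb":      {"residential": (1, 2, "gabled"), "commercial": (1, 3, "flat")},
}

def generate_building(fp: Footprint, region: str, floor_height_m: float = 3.0) -> dict:
    min_floors, max_floors, roof = ARCHETYPES[region][fp.zoning]
    floors = random.randint(min_floors, max_floors)
    return {"width_m": fp.width_m, "depth_m": fp.depth_m,
            "height_m": floors * floor_height_m, "roof_type": roof}

print(generate_building(Footprint(12.0, 9.0, "residential"), "central_europe"))
```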
Even so, the modern simulator isn’t perfect. A city street may look right, but your house probably won’t. Simulated air traffic isn’t as heavy as it is in reality. In-simulation messages from air-traffic control are often inaccurate or absent, especially at midsize airports. Weather is often stunningly beautiful, but it only vaguely mimics the real world. The solutions to these problems will, like so many of Flight Simulator’s additions and features, require new technologies.
Yet one core success is undeniable: Microsoft Flight Simulator fits the entire world in your PC. It can even fit the entire world in your pocket through Microsoft’s xCloud streaming app for smartphones, allowing anyone with a modern smartphone to load the simulator and fly (virtually) from any location in the world to any other.
“We had this ambition to get the whole world in there,” says Bayless. “And, in fact, we kind of did.”
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Enjoy today’s videos!
This video presents the AmphiSAW robot, which relies on a unique wave-producing mechanism driven by a single motor. The AmphiSAW is one of the fastest amphibious robots and the most energy-efficient one. Its bio-inspired mechanism is also bio-friendly, allowing the robot to swim among fish without intimidating them.
A paper on AmphiSAW appears in Bioinspiration and Biomimetics 2023.
I am pretty sure this is a fake robot arm, which means Northrop Grumman gets added to the list of “companies that really should be able to do things with real robot arms but aren’t for some reason.”
This is not a great video, but it’s a really cool idea: Hod Lipson’s Robotics Studio course at Columbia teaches students to design, fabricate, and program robots that walk. Here are 49 of them.
There are many moments in the Waymo Driver’s day when it finds itself at a crossroads and must decide what to do in a fraction of a second. Watch Software Engineer Daylen Yang break down the challenge of intersections and what we’re doing to build a safe driver—the Waymo Driver—for every road user.
Kaitlyn Becker was working on her doctorate at Harvard University when she helped develop a soft robotic system that can handle complex objects by using entanglement grasping. She joins to explain how creatures of the sea inspired the robotic gripper and how it might be used in the future.
Semiconductor companies, which united to get the CHIPS Act approved, have set off a lobbying frenzy as they argue for more cash than their competitors.
OpenAI in November launched ChatGPT, one of the most significant real-world applications of artificial intelligence to date. The tool allows its users to quickly generate sophisticated textual content that is uniquely constructed.
Such content, therefore, likely can avoid detection by traditional plagiarism tools—which creates a concern for universities about how to assess students’ learning and skill development.
Many types of assessments used to evaluate students require them to demonstrate they have understood new materials by investigating the content and collating their learning in the form of a written essay or report. The role of an academic assessor has been to evaluate individual students’ submissions to gauge the breadth and depth of their understanding of the topic.
If students use ChatGPT to write an essay or report, the problem is that the output provides little, if any, indication of the quality of their learning. The AI tool offers well-written, researched content without the need for the student to search for detailed sources.
Unfortunately, this type of problem is not new. For many years, students have been able to copy text from essay banks. Anti-plagiarism detection tools such as iThenticate and Turnitin have deterred the use of such repositories. Although those tools have been successful, they rely on sophisticated pattern matching, which leaves them ill-equipped to detect the language constructs produced by advanced AI.
Another way students get around writing original content is through essay mills, which provide writing services for a fee. It could be argued that the open nature of ChatGPT has leveled the playing field between those who can and can’t afford to pay for the services.
A different assessment for STEM students
In the fields of science, technology, engineering, and mathematics, a far broader assessment strategy than essays is used. To meet the learning outcomes, STEM students must demonstrate skills such as programming. Because ChatGPT can solve many mathematical problems and generate and debug code, however, the computing field cannot simply ignore the AI evolution. And ChatGPT’s capabilities are certain to improve over time.
The immediate reaction from academia is likely to be to adopt traditional assessment strategies that comprise predominantly closed-book, exam-style assessments.
“We recommend an approach where teaching and learning adapt to recognize the opportunities posed by new technologies.”
Before adopting that obvious quick fix, though, it is important to reflect on the reasons why a broad assessment strategy was adopted in the first place.
Engineering and computing students must tackle large, complex problems and adopt collaborative strategies. The skill sets are not easily or accurately tested individually in an examination hall in a three-hour period. To some extent, essays—and, perhaps even more controversially, doctoral theses—are already not well aligned to the needs of many employers.
Issues universities need to consider
ChatGPT and similar technologies will continue to shape the future of what we call the World of Work (WoW). As employers increasingly adopt advanced AI, the academic world will need to amend its teaching and assessment practices. ChatGPT and other AI tools are already being adopted in industry as a way to automate mundane tasks. The big question is: Should educators ban such developments or embrace them?
Here are some issues that universities might want to consider.
Awareness is the best line of defense. Educate students and staff about the strengths and weaknesses of AI-generated content. For example, when does reliance on localized or peer-reviewed content matter, and when is a quick-and-dirty content review sufficient?
Develop assessment and other educational practices for the WoW that embrace or reject the use of AI-generated content. Authentic assessment tasks aligned to a local context or problem, for example, encourage students to foster a culture of exploration and curiosity. Project-based assessment tasks are a good example: They let students explore ChatGPT and similar tools while ultimately demonstrating their learning and skills on their own. Assessment criteria also will need to recognize sophisticated use of AI while giving greater recognition to elements that demonstrate higher-order skills such as evaluation and synthesis.
Reassure staff that new tools are emerging to detect the use of AI. Princeton student Edward Tian has already created one such tool: GPTZero, which adopts thinking similar to OpenAI’s tool but uses deep learning in reverse to detect ChatGPT.
Adapt to embrace opportunities brought on by innovation. Consider using ChatGPT and similar tools to advance pedagogy and curricula. Everyone finds unnecessary repetition and menial tasks tiresome. Use the tools to stretch and enable more innovation by students.
Reinforce principles of professional standards and ethics. Advance a culture of academic integrity, acknowledging that new tools will continue to emerge. If an ethical culture is ingrained in the group ethos, students and scholars will use new AI tools appropriately.
ChatGPT and similar tools should be seen as accelerating necessary change. We recommend an approach where teaching and learning adapt to recognize the opportunities posed by new technologies and continue to foster a culture of exploration and curiosity. Ultimately, our priority is to provide graduates ready to face the ever-changing WoW.
The views expressed here are the authors’ own and do not represent positions of IEEE Spectrum, The Institute, or IEEE. The authors write in their personal capacity.
David Wakeling, head of London-based law firm Allen & Overy's markets innovation group, first came across law-focused generative AI tool Harvey in September 2022. He approached OpenAI, the system’s developer, to run a small experiment. A handful of his firm’s lawyers would use the system to answer simple questions about the law, draft documents, and take first passes at messages to clients.
The trial started small, Wakeling says, but soon ballooned. Around 3,500 workers across the company’s 43 offices ended up using the tool, asking it around 40,000 queries in total. The law firm has now entered into a partnership to use the AI tool more widely across the company, though Wakeling declined to say how much the agreement was worth. According to Harvey, one in four lawyers at Allen & Overy now uses the AI platform every day, with 80 percent using it once a month or more. Other large law firms are starting to adopt the platform too, the company says.
The rise of AI and its potential to disrupt the legal industry has been forecast multiple times before. But the rise of the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.
The state has become a hub for chip makers including Intel and TSMC, as the government prepares to release a gusher of funds for the strategic industry.
The case, concerning a law that gives websites immunity for suits based on their users’ posts, has the potential to alter the very structure of the internet.
Fifth- and sixth-generation (5G/6G) mobile networks are designed and implemented to support the growth of many applications: delivering new entertainment experiences, serving as the backbone of intelligent autonomous mobility, revolutionizing healthcare, and propelling manufacturing into a new era of smart, connected factories and products that tie together billions of cellular devices, millions of autonomous vehicles, and trillions of sensors. 5G/6G will significantly increase performance and efficiency over previous mobile networks.
This new performance allows new, larger applications to flourish along with next-generation mobile-to-mobile communications. Next-generation connectivity, powered by communications systems like 5G/6G, will radically transform all industry sectors, and the impact could boost global GDP by more than $2 trillion by 2030. Additionally, millimeter-wave communication has become one of the most attractive techniques for implementing 5G systems because it can meet these requirements and enable multi-Gbps throughput.
Beam-steerable, high-gain phased-array antennas are a key component of 5G/6G cellular systems. They increase network capacity by improving the signal-to-interference ratio (SIR): Narrow transmit beams deliver sufficient signal power to the receiver terminal at greater distances. Several antenna-array configurations have been investigated for 5G applications, such as patch antennas, printed microstrip antennas, and cylindrical conformal microstrip antennas. 5G systems can also use adaptive beamforming antenna arrays to enable multi-user massive MIMO, which makes more efficient use of the radiated power.
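For readers who want a feel for how electronic beam steering works, the short sketch below computes the array factor of a uniform linear array and applies a progressive phase shift to point the main beam at a chosen angle. The element count, spacing, and steering angle are arbitrary example values, not parameters from the webinar.

```python
# Array factor of a uniform linear array with a progressive phase shift,
# steering the main beam toward a chosen angle. Example values only.
import numpy as np

N = 16                       # number of antenna elements
d = 0.5                      # element spacing in wavelengths
steer = np.radians(30.0)     # desired beam direction from broadside

n = np.arange(N)
# Per-element phase that compensates the path difference toward the steering angle.
weights = np.exp(-1j * 2 * np.pi * d * n * np.sin(steer))

angles = np.radians(np.linspace(-90, 90, 721))
# Normalized array response at every look angle (sum over elements).
af = np.abs(np.exp(1j * 2 * np.pi * d * np.outer(np.sin(angles), n)) @ weights) / N

peak = np.degrees(angles[np.argmax(af)])
print(f"main beam points at roughly {peak:.1f} degrees")   # ~30 degrees
```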
What You Will Learn
- Phased Antenna Array Modeling & Optimization for 5G/6G deployment
- Antenna system coverage and beamforming techniques evaluation
China’s tech giants including Baidu, Alibaba and NetEase are racing to match the west’s recent developments in artificial intelligence, touting projects that they hope will achieve the same buzz created by the release of ChatGPT.
After months of announcing cost cuts and headcount reductions, big groups are now optimistically presenting investment plans to rival OpenAI’s chatbot, while trademark trolls are lining up to claim words related to ChatGPT’s achievements.
Zhou Hongyi, head of Internet security company Qihoo 360, characterized ChatGPT, a program that produces realistic text answers to questions posed by humans, as the start of the artificial intelligence revolution. “It has shortcomings but also unlimited potential,” he said in a talk-show discussion last week.
With over 26,000 followers, Jos Avery's Instagram account has a trick up its sleeve. While it may appear to showcase stunning photo portraits of people, they are not actually people at all. Avery has been posting AI-generated portraits for the past few months, and as more fans praise his apparently masterful photography skills, he has grown nervous about telling the truth.
"[My Instagram account] has blown up to nearly 12K followers since October, more than I expected," wrote Avery when he first reached out to Ars Technica in January. "Because it is where I post AI-generated, human-finished portraits. Probably 95%+ of the followers don't realize. I'd like to come clean."
Avery emphasizes that while his images are not actual photographs (except two, he says), they still require a great deal of artistry and retouching on his part to pass as photorealistic. To create them, Avery initially uses Midjourney, an AI-powered image synthesis tool. He then combines and retouches the best images using Photoshop.
Twitter announced Friday that as of March 20, it will only allow its users to secure their accounts with SMS-based two-factor authentication if they pay for a Twitter Blue subscription. Two-factor authentication, or 2FA, requires users to log in with a username and password and then an additional “factor” such as a numeric code. Security experts have long advised that people use a generator app to get these codes. But receiving them in SMS text messages is a popular alternative, so removing that option for unpaid users has left security experts scratching their heads.
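Those app-generated codes are typically time-based one-time passwords (TOTP, RFC 6238). The sketch below shows the basic mechanism using only Python's standard library; the shared secret is a made-up example, not anything issued by Twitter.

```python
# Minimal TOTP (RFC 6238) generator using only the standard library.
# The secret here is a placeholder; real secrets come from the service's QR code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                   # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # a 6-digit code that changes every 30 seconds
```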
Twitter's two-factor move is the latest in a series of controversial policy changes since Elon Musk acquired the company last year. The paid service Twitter Blue—the only way to get a blue verified checkmark on Twitter accounts now—costs $11 per month on Android and iOS and less for a desktop-only subscription. Users being booted off of SMS-based two-factor authentication will have the option to switch to an authenticator app or a physical security key.
The justices are set to hear a case challenging Section 230, a law that protects Google, Facebook and others from lawsuits over what their users post online.
A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence.
Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.
The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today’s widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI.