iOS 18.5 Brings Emergency Satellite Connectivity to iPhone 13
Apple has rolled out the latest iOS 18.5 update, bringing a significant upgrade for iPhone 13 users: emergency satellite connectivity. Previously exclusive to the iPhone 14 and newer models, this life-saving feature is now accessible to a wider audience.

The satellite-based system is designed to help users stay connected in critical situations where Wi-Fi and cellular networks are unavailable. It enables Emergency SOS messaging via satellite and allows users to text contacts, share their location, and even request roadside assistance, all without a standard signal. With iOS 18.5, Apple is also extending support to satellite services from select carriers, including T-Mobile’s collaboration with Starlink. Users can check their device’s compatibility and the availability of satellite services under Settings > Cellular on their iPhones.

This satellite feature has already proven its value in real-world emergencies: it has been credited with aiding rescue efforts during wildfires, locating missing hikers, and saving lives when traditional communication methods failed. Beyond this headline feature, iOS 18.5 brings several smaller but useful enhancements, and Apple released parallel updates for its other platforms alongside it. With this update, Apple continues to emphasize user safety and experience, bringing high-end features to older devices and streamlining daily functionality. Source: (Techcrunch)
Apple’s Vision for 2027: A Curved Glass iPhone
As Apple gears up to celebrate the iPhone’s 20th anniversary in 2027, the tech giant appears to have big plans in store, ones that could redefine the future of mobile and smart technology. According to Bloomberg’s Mark Gurman, Apple is preparing a bold redesign for a future iPhone, featuring a sleek, mostly glass construction with a curved form and no display cutouts, marking a significant departure from current models.

A Glimpse at the Curved Glass iPhone

Apple’s rumored 2027 iPhone could introduce a nearly seamless, edge-to-edge display by embedding the front-facing camera beneath the screen. This design move would eliminate the need for notches or punch-holes, delivering a true full-screen experience. Gurman describes the device as “mostly glass and curved,” suggesting Apple may be developing a wraparound or loop-style design, a concept hinted at in previously filed patents. Though companies like Samsung and Vivo have experimented with curved-edge displays, Apple seems to be exploring a unique path that aligns with its minimalist, functional aesthetic. If realized, this innovation could mark one of the most radical design shifts in the iPhone’s history.

Beyond the iPhone: A 2027 Apple Tech Surge

The 2027 milestone won’t just be about iPhones; Gurman hints at a broader wave of Apple innovation set for that year.

What It All Means

If these predictions come to pass, 2027 could be a landmark year in Apple’s timeline, not just for commemorating two decades of the iPhone, but for showcasing the company’s next leap forward in design, AI, and consumer tech. While Apple has kept many of these projects under wraps, this wave of innovation aligns with the company’s history of waiting to perfect emerging technologies before bringing them to market. As the race to dominate the AI-driven future accelerates, Apple is signaling that it’s not only in the game, it’s planning to change it. Source: (The Verge)
Microsoft and OpenAI Redefining Their Billion-Dollar Partnership
A potential shakeup could be underway in one of the tech industry’s most high-profile collaborations. According to reports from the Financial Times, Microsoft and OpenAI, partners in AI advancement and innovation, are now engaged in complex negotiations that could reshape the nature of their relationship.

Microsoft, which has poured over $13 billion into OpenAI, is reportedly at the center of discussions over the AI startup’s new corporate structure. OpenAI plans to restructure into a for-profit public benefit corporation but intends to retain oversight under its existing nonprofit board, a structure that requires key investor approval. At the heart of the conversation is equity: how much Microsoft will receive in the reorganized for-profit entity. But the talks don’t stop there. Both companies are also re-evaluating the broader terms of their collaboration, including a proposal for Microsoft to give up some of its current equity in return for extended access to OpenAI’s technologies beyond the current agreement’s 2030 limit.

This negotiation comes at a time when OpenAI’s own enterprise ambitions are growing, and its once clear-cut reliance on Microsoft is becoming more nuanced. OpenAI’s expansion into the cloud and enterprise space has positioned it as a potential competitor to Microsoft in some arenas, complicating what was once a purely symbiotic alliance. As OpenAI moves forward with large-scale projects like its “Stargate” AI infrastructure, the power dynamic in its partnership with Microsoft is shifting. Microsoft remains a crucial player in the AI ecosystem, thanks to its integration of OpenAI models into services like Azure and Copilot, but OpenAI’s rising independence signals that future collaboration may look very different.

While neither company has publicly confirmed the details, the Financial Times report paints a picture of strategic repositioning. The outcome of these talks could have significant ripple effects for enterprise AI, cloud services, and the competitive landscape in generative AI. Source: (Techcrunch)
Amazon’s “Vulcan” Robot Signals a New Era of Human Jobs in an AI-Driven World
As artificial intelligence continues reshaping the workplace, tech companies face growing scrutiny over the role humans will play in this new landscape. In a world where machines increasingly handle repetitive tasks, what’s left for the people currently doing those jobs?

Amazon has offered a rare glimpse into this future with the introduction of Vulcan, a warehouse robot designed to handle physically strenuous picking tasks, ones that often involve awkward bending or climbing. This isn’t just about efficiency; it’s also a shift in how human labor is valued and repurposed. Unlike earlier generations of warehouse bots, Vulcan is equipped with a sense of “touch,” allowing it to grasp a wider variety of items. It is already being used to handle 75% of customer orders by managing goods stored on hard-to-reach shelves. The remaining tasks, including items stored at mid-level or those the robot still can’t handle, are done by humans.

But Amazon isn’t just swapping people for machines. It’s also launching initiatives to retrain warehouse workers for roles like robotics technician, reliability engineer, and floor monitor: new positions born from the very technology that’s replacing traditional labor. This hybrid approach reflects a broader belief in the tech industry that automation will create more jobs than it destroys, even if the new roles require different skills. The World Economic Forum predicts that while 92 million jobs may be displaced by AI and automation, 170 million new ones could emerge, a projected net gain of 78 million.

Still, retraining a workforce at scale is a major challenge. Not every worker will have the interest or aptitude to become a robot tech. And as Amazon’s case shows, the transition won’t be one-to-one: far fewer people are needed to maintain and monitor bots than to do manual warehouse picking. Amazon’s retraining efforts are significant because they move beyond vague promises and into tangible programs. For now, only a small number of workers are being trained for robot-related roles, but the effort hints at what could become a common template: humans supervising and maintaining machines, rather than being replaced by them wholesale.

However, there is also skepticism about how widespread such transformations will be. Not all companies have Amazon’s resources to deploy or maintain advanced robotics. Many industries, especially smaller retail and food-service businesses, may remain human-dependent for years or even decades. Amazon’s past attempt to automate retail with its “Just Walk Out” technology met limited success, partly due to its reliance on human video reviewers and low adoption outside Amazon-owned stores.

Amazon’s Vulcan robot might not be the end of warehouse jobs; it could be the beginning of a new kind of job altogether. The big question is whether companies will genuinely invest in preparing the current workforce for these new roles or whether only a select few will benefit from the AI revolution. Source: (Techcrunch)
Microsoft Bans Employee Use of DeepSeek App
In a recent U.S. Senate hearing, Microsoft President Brad Smith announced that the company has prohibited its employees from using the AI chatbot application DeepSeek. The decision stems from concerns about data security and the potential dissemination of Chinese state-sponsored propaganda.

Reasons Behind the Ban

DeepSeek, developed by a Chinese startup, has been under scrutiny for its data-handling practices. The application’s privacy policy indicates that user data is stored on servers located in China. Under Chinese law, companies are required to cooperate with national intelligence agencies, raising concerns about the confidentiality and security of user information. Additionally, DeepSeek has been reported to censor topics deemed sensitive by the Chinese government, aligning its responses with state narratives. Smith emphasized that these factors create a risk of user data being accessed by Chinese authorities and a potential for the app to propagate government-influenced content. Consequently, Microsoft has also refrained from listing DeepSeek in its app store.

Microsoft’s Engagement with DeepSeek’s Technology

Despite the ban on the application, Microsoft has engaged with DeepSeek’s underlying technology. The company incorporated DeepSeek’s R1 model into its Azure cloud service after conducting rigorous safety evaluations. Smith mentioned that Microsoft modified the AI model to mitigate “harmful side effects,” although specific details about these alterations were not disclosed. Notably, while DeepSeek’s application is restricted, the model’s open-source nature allows organizations to deploy it on their own servers, potentially bypassing direct data transmission to China; a minimal sketch of such a self-hosted deployment follows below. However, concerns remain about the model’s propensity to generate content aligned with Chinese propaganda or to produce insecure code.

Microsoft’s stance reflects a growing trend among organizations and governments to scrutinize and, in some cases, restrict the use of foreign-developed AI applications due to national security concerns. Other countries, including Australia and Italy, have implemented similar bans or restrictions on DeepSeek. The situation underscores the complexities of global AI development and deployment, where technological advancements intersect with geopolitical considerations. As AI continues to evolve, ensuring data security and content integrity remains a paramount concern for both developers and users worldwide. Source: (Techcrunch)
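The self-hosting option mentioned above can be illustrated with a minimal sketch using the open-weights distilled R1 checkpoints published on Hugging Face. The model id, prompt, and generation settings here are illustrative assumptions, not a description of Microsoft’s Azure deployment:

```python
# Illustrative only: run a small distilled DeepSeek-R1 variant entirely on
# local hardware, so prompts and outputs never leave the machine.
# Requires: pip install torch transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small open-weights checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarize the trade-offs of self-hosting an open-weights language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because inference runs locally, no request ever reaches DeepSeek’s servers, which is precisely the property that distinguishes the open-source model from the hosted app Microsoft banned.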
Meta’s “Super-Sensing” Smart Glasses: The Next Leap in AI-Powered Wearables
Meta is pushing the boundaries of wearable technology with its latest project: AI-powered smart glasses designed to understand and interact with your everyday life in real time. With facial-recognition capabilities and context-aware reminders, these next-generation devices signal a bold step toward integrating artificial intelligence into daily routines.

According to reports, Meta is currently working on two new pairs of smart glasses, codenamed Aperol and Bellini, set for release next year. These wearables aim to go beyond typical smart functions by incorporating what Meta calls “super-sensing” vision software. This technology is being designed to recognize people by name, thanks to AI-powered facial recognition. But it’s not just about identifying faces; the goal is to create a wearable assistant that is deeply aware of your surroundings and habits.

A key feature in development is the “live AI” system, which can be activated with the phrase “Hey Meta, start live AI.” Once active, the system uses built-in cameras and sensors to monitor your actions and environment continuously. This enables real-time contextual assistance: for example, it might notice you’re leaving home without your keys or remind you to grab groceries on your walk back. This AI goes far beyond passive information delivery; it’s designed to track behavior and respond dynamically, functioning more like a personal assistant than a traditional gadget. (A hypothetical sketch of this control flow appears below.)

The always-on nature of this system comes with one major hurdle: battery life. When tested on existing glasses, the live AI system reportedly drains the battery in just 30 minutes. Meta is working to extend this performance significantly, aiming for new models, and potentially camera-equipped earphones, that can operate for several hours per charge. Achieving this will require careful balancing of hardware design, software efficiency, and power management, all while maintaining user comfort and style.

The privacy implications of always-on cameras and facial recognition are substantial. In response, Meta has updated its internal processes for evaluating the safety and ethical risks of its products, aiming to release innovations faster while still addressing public concerns. Given the backlash Meta has faced in the past over privacy and data handling, this new wave of AI-enabled wearables is sure to reignite debates around surveillance, consent, and personal data protection.

Meta’s vision for wearable technology is ambitious: a world where your devices don’t just listen, but see, understand, and help. With super-sensing capabilities and continuous AI support, smart glasses could soon become intelligent companions, blending seamlessly into our daily lives. As the tech industry watches closely, the coming year may reveal whether Meta can turn this futuristic concept into a practical, trusted reality. Source: (The Verge)
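To make the reported behavior concrete, here is a purely hypothetical sketch of the control flow a wake-phrase-armed, continuously sensing assistant implies. None of these function names correspond to any real Meta API; the battery budget reflects the roughly 30-minute runtime reported for current hardware:

```python
# Hypothetical sketch only: no real Meta API is used or implied.
import time

WAKE_PHRASE = "hey meta, start live ai"
BATTERY_BUDGET_S = 30 * 60  # ~30 min of continuous sensing, per the report

def live_ai_loop(listen, capture_frame, infer_context, notify):
    """Arm on the wake phrase, then sense and assist until power runs out.

    All four callables are stand-ins for on-device components: microphone
    transcription, camera capture, a vision model, and a display notifier.
    """
    while listen() != WAKE_PHRASE:
        pass  # idle until the user explicitly arms live AI

    started = time.monotonic()
    while time.monotonic() - started < BATTERY_BUDGET_S:
        frame = capture_frame()        # camera + sensor snapshot
        hint = infer_context(frame)    # e.g. "keys still on the table"
        if hint:
            notify(hint)               # proactive, context-aware reminder
        time.sleep(1.0)                # sampling rate trades latency for power
```

The sampling interval in the last line is exactly the kind of knob Meta would need to tune to stretch runtime from 30 minutes toward several hours.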
Is ChatGPT Smart Enough to Pass the Turing Test?
Artificial intelligence continues to evolve rapidly, with tools like ChatGPT becoming increasingly lifelike in how they communicate. But the question remains: are these AI systems sophisticated enough to pass the Turing Test, a decades-old benchmark for machine intelligence?

Understanding the Turing Test

Proposed by Alan Turing in the mid-20th century, the Turing Test is a simple yet profound concept. A human evaluator engages in conversations with both a machine and a real person, without knowing which is which. If the evaluator can’t reliably tell the machine from the human, the AI is said to have passed the test. Passing the Turing Test doesn’t prove an AI is truly intelligent or self-aware, but it does suggest that the machine can convincingly replicate human-like conversation, at least in certain contexts.

Recent research suggests that large language models (LLMs) like GPT-4 and GPT-4.5 are increasingly able to pass the Turing Test. A study by UC San Diego found that GPT-4 was mistaken for a human 54% of the time. When the updated GPT-4.5 was tested, it performed even better, being judged human 73% of the time, actually outpacing the real people in the study, who were correctly identified 67% of the time. (A toy calculation at the end of this article shows what these rates imply.) Further supporting evidence comes from the University of Reading: in an experiment where GPT-4 generated responses for undergraduate take-home assignments, only one of 33 AI-generated submissions was flagged by the graders; the rest earned above-average marks without suspicion.

Do LLMs Actually “Think”?

Despite their performance, LLMs don’t truly think. They lack self-awareness, emotions, and beliefs. Instead, they generate responses by analyzing vast amounts of data and predicting what text is most likely to come next in a conversation. This process is driven by statistical probability rather than conscious reasoning or understanding. So while an AI might sound human, it is essentially mimicking language patterns without any true comprehension of the ideas it expresses.

Although the Turing Test is historically significant, some experts argue it may no longer be the best measure of AI intelligence. Cognitive scientist Gary Marcus has criticized it as a reflection of human susceptibility rather than a robust test of machine capability. As AI becomes more integrated into tasks that go far beyond conversation, from decision-making to autonomous systems, new benchmarks are needed to assess real-world intelligence, adaptability, and ethical reasoning.

ChatGPT and similar models are undeniably advancing to the point where they can often fool people into thinking they’re human. In that sense, many do “pass” the Turing Test, at least some of the time. But whether this means they possess true intelligence is still up for debate. As AI continues to grow in complexity, the Turing Test may remain a useful, if limited, way to gauge progress, but the future of AI evaluation likely lies in more nuanced, multifaceted benchmarks that can assess performance across a range of human-like abilities. Source: (Mashable)
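The toy calculation referenced above: the pass criterion in such studies is that interrogators cannot reliably separate machine from human, i.e., the AI’s judged-human rate sits at or above both chance and the human baseline. The rates below are the ones cited in the article; the judgment count n is an assumption, since the article does not report it:

```python
# Toy illustration of the Turing Test pass criterion. Rates are from the
# UC San Diego results cited above; n is an assumed judgment count.
from scipy.stats import binomtest

n = 300  # assumed number of interrogator judgments per condition
human_baseline = 0.67  # real participants were judged human 67% of the time

for label, rate in [("GPT-4", 0.54), ("GPT-4.5", 0.73)]:
    k = round(n * rate)
    # Is the judged-human rate reliably above the 50% coin-flip line?
    p = binomtest(k, n, p=0.5, alternative="greater").pvalue
    verdict = "above" if rate > human_baseline else "below"
    print(f"{label}: judged human {rate:.0%} "
          f"({verdict} the {human_baseline:.0%} human baseline), "
          f"p(> chance) = {p:.3g}")
```

On these numbers GPT-4.5 clears both bars while GPT-4 only hovers near chance, which matches the article’s framing that the newer model “passes” far more convincingly.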
Google I/O 2025: Major AI and Android Innovations Unveiled
In mid-May, Google I/O 2025 will shift its spotlight squarely onto artificial intelligence, from deep dives into its latest Gemini models to fresh AI-powered consumer features. While Android 16 won’t be the star of the main keynote, its revamped “Material 3 Expressive” design and new usability tweaks will roll out both in preview and in a dedicated Android Show the week prior. Wear OS 5.1 arrives under the radar with tighter integration into Android 15, and Google’s renewed push into AR/VR comes via Android XR, backed by its first Project Moohan headset. On the AI front, developers will get expanded access to Gemini 2.5 Pro, sneak peeks at Google’s universal assistant prototype “Project Astra,” and a mobile release of NotebookLM for on-the-go research.

Google I/O’s main keynote takes place on May 20, 2025 at 10 a.m. PT, livestreamed from Mountain View’s Shoreline Amphitheatre (New York 1 p.m., London 6 p.m., Mumbai 10:30 p.m.) (Google I/O). The event runs through May 21, featuring sessions on AI, developer tools, and platform updates.

Android 16 & Material 3 Expressive

Android’s next major update, Android 16, debuts a bolder “Material 3 Expressive” design that emphasizes vibrant colors, larger tappable areas, and animated icons to improve both aesthetics and usability. Google accidentally leaked these details in a blog post that was swiftly removed, revealing performance gains in interface comprehension and accessibility across all age groups.

The Android Show: I/O Edition

Rather than wait for the main conference, Google will host The Android Show: I/O Edition on May 13 at 10 a.m. PT, giving fans an early look at Android 16 features (Auracast Bluetooth improvements, summarized notifications, and Pixel-exclusive enhancements), hosted by Android Ecosystem President Sameer Samat (Android).

Wear OS 5.1 Sneak Peek

Wear OS 5.1 quietly launched in March. Built atop Android 15, it introduces Credential Manager support for seamless password and passkey authentication, revamped media controls, and improved step tracking for Pixel Watch users. Expect further Wear OS news in I/O breakout sessions.

Android XR & Project Moohan

Google’s fourth attempt at an immersive OS, Android XR, is designed for AR/VR headsets and optimized with Gemini AI in mind. Samsung’s Project Moohan headset, rumored to ship in 2025, will be the first consumer device on XR. Details on new hardware partners and developer tools will surface at I/O.

Generative AI & Gemini

Google will showcase Gemini 2.5 Pro Preview, now available to developers, with enhanced coding prowess for front-end UI workflows and agentic automation tasks. Expect in-depth sessions on performance benchmarks, real-world integrations (e.g., in Workspace, Search, and mobile), and the on-device inference strategies that underlie Google’s “AI-first” roadmap. (A minimal API sketch follows this article.)

Looking Ahead: Project Astra & NotebookLM Mobile

Beyond Gemini, Google’s Project Astra prototype envisions a universal AI assistant that understands context across devices, from phones to experimental glasses (Google DeepMind). Additionally, the NotebookLM research assistant is rolling out as a standalone mobile app, enabling users to upload documents, ask follow-up questions conversationally, and receive audio overviews on the go. Source: (Mashable)
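For developers curious what “now available” means in practice, here is a minimal sketch of calling the Gemini 2.5 Pro preview from Python with the google-genai SDK. The preview model id shown is an assumption based on the naming current at the time of writing and is likely to change:

```python
# Minimal sketch: one-shot call to the Gemini 2.5 Pro preview.
# Requires: pip install google-genai, plus an API key from Google AI Studio.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # replace with a real key

response = client.models.generate_content(
    model="gemini-2.5-pro-preview-05-06",  # assumed preview id; check current docs
    contents="Generate a responsive CSS grid layout for a photo gallery.",
)
print(response.text)
```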
Upcoming ASUS ROG Ally 2 Variants Leak Ahead of Launch
Leaked regulatory photos have unveiled the upcoming Asus ROG Ally 2 handheld gaming devices, including a new Xbox-branded model developed in collaboration with Microsoft under the codename “Project Kennan.” This Xbox version features a distinctive black design and a dedicated Xbox logo button, differentiating it from the standard white model.

The FCC and Indonesian certifications confirm two hardware variants: an Xbox edition powered by an 8-core, 36W AMD Ryzen Z2 Extreme CPU, and a standard model with a 4-core, 20W chip. Both versions retain a 7-inch, 120Hz LCD similar to the first-generation ROG Ally. The design adds molded controller grips for improved ergonomics, making the device bulkier but potentially more comfortable. With certifications in progress, a launch is expected soon, possibly around Computex starting May 20th or Microsoft’s Build conference on May 19th. (The Verge)

The black Xbox-branded variant is anticipated to come preloaded with the Xbox PC app, granting users access to services like PC Game Pass and Xbox Play Anywhere titles. This aligns with Microsoft’s broader strategy of integrating the Xbox and Windows platforms into a unified gaming ecosystem. While the official release date remains unconfirmed, the timing of these leaks and certifications suggests an announcement could coincide with major industry events in late May. Gamers and tech enthusiasts should stay tuned for official updates from Asus and Microsoft in the coming weeks. Source: (The Verge)
Fidji Simo Joins OpenAI as CEO of Applications
Fidji Simo, the current CEO of Instacart, is set to join OpenAI later this year as the company’s CEO of Applications. Simo, who has been a board member at OpenAI since March 2024, will transition out of her role at Instacart over the next few months and will continue to serve as Chair of Instacart’s board during this period. (Reuters)

In her new position, Simo will oversee OpenAI’s Applications division, which encompasses the business and operational teams responsible for bringing the company’s research to the public. OpenAI CEO Sam Altman emphasized that Simo’s leadership will allow him to focus more on research, computational development, and AI safety as the organization advances toward superintelligence.

Simo brings extensive experience in product management and monetization from her previous roles at eBay and Meta. At Meta, she led significant product developments, including Facebook Live and Facebook Watch, and rose to become head of the Facebook app. After joining Instacart in 2021, she played a pivotal role in taking the company public in 2023. (Wikipedia)

In a statement, Simo expressed her enthusiasm for joining OpenAI at a critical moment, highlighting the organization’s capacity to accelerate human potential at an unprecedented pace, and emphasized her commitment to shaping AI applications toward the public good. This leadership change follows OpenAI’s recent decision to reverse a major restructuring plan, maintaining its nonprofit parent’s control and likely reducing Altman’s power. Source: (Techcrunch)