OpenAI Replaces Assistants API with Responses API

OpenAI has introduced the Responses API, a new interface for developers building AI agents capable of tasks such as web browsing, internal document analysis, and software tool interaction. This API replaces the now-deprecated Assistants API, which will be fully phased out in the first half of 2026. The shift signals OpenAI's broader move toward an agent-first architecture, enabling more autonomous and context-aware workflows for enterprise and consumer applications.

Specialized AI Agents Priced as High as $20,000/Month

According to a report from The Information, OpenAI is preparing a suite of premium AI agents targeting enterprise customers. Offerings include:

These price points reflect the company's efforts to monetize its platform after reporting significant operating losses, estimated at $5 billion in the previous fiscal year.

ChatGPT macOS App Now Supports Direct Code Editing

OpenAI's ChatGPT app for macOS now allows users to edit code directly in integrated environments such as Xcode, VS Code, and JetBrains. This feature is currently limited to Plus, Pro, and Team subscribers. Support for Enterprise and Education tiers is expected in the coming months.

ChatGPT Weekly User Count Reaches 400 Million

A recent report from a16z revealed that ChatGPT's weekly active users increased from 200 million in August 2024 to over 400 million by February 2025. This growth has been attributed to the launch of GPT-4o, a faster and more multimodal version of OpenAI's flagship model.

OpenAI Cancels o3 Model in Favor of GPT-5

OpenAI has scrapped the planned standalone release of its "o3" model. The underlying advancements will instead be integrated into a unified GPT-5 model, intended to streamline development and deployment across ChatGPT and API integrations.
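Returning to the first item above: a minimal sketch of how a developer might assemble a call to the new Responses API using the official `openai` Python SDK. The model name, the `web_search` tool type, and the `build_responses_request` helper are illustrative assumptions here, not confirmed defaults.

```python
# Sketch of a Responses API request (assumed parameter names; verify
# against OpenAI's current API reference before use).

def build_responses_request(prompt, model="gpt-4o", tools=None):
    """Assemble a request body for client.responses.create(**body)."""
    body = {"model": model, "input": prompt}
    if tools:
        # e.g. a web-browsing tool, per the agent capabilities described above
        body["tools"] = tools
    return body

body = build_responses_request(
    "Summarize the key points of our internal Q3 report.",
    tools=[{"type": "web_search"}],
)

# With an API key configured, the actual call would look like:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**body)
#   print(response.output_text)
```

The point of the new interface, per OpenAI, is that tool use (browsing, file analysis) is declared alongside the prompt rather than managed through the older Assistants thread/run lifecycle.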
GPT-4o Reduces Prompt Energy Usage by Nearly 90 Percent

New research from Epoch AI indicates that GPT-4o requires significantly less energy per prompt than its predecessors—approximately 0.3 watt-hours per request. While this figure excludes multimodal inputs, it represents a major step forward in energy efficiency.

o3-Mini Update Introduces Greater Transparency

An update to the o3-mini model now offers users better insight into the AI's reasoning process. This change allows for clearer breakdowns of how responses are generated, which may improve trust and understanding in both casual and technical use cases.

ChatGPT Adds Web Search Without Login

Web search via ChatGPT is now available to users who are not logged in. This capability is currently limited to the browser version of ChatGPT and is not yet available through mobile apps.

Deep Research Agent Released

OpenAI has rolled out a Deep Research Agent designed for complex analytical tasks. It is aimed at users in academia, market research, and policy analysis who need more than surface-level summaries and require verifiable, multi-source output.

Additional Product and User Updates

Operator Agent in Limited Preview

OpenAI is currently testing an Operator Agent, a tool that can complete browser-based workflows. Early access appears tied to users subscribed to a newly introduced $200/month Pro plan, according to code leaks and user reports.

Mobile User Base Skews Young and Male

Appfigures data indicates that approximately 85 percent of ChatGPT mobile users are male, with the majority under the age of 25. The next-largest age bracket is 50–64, suggesting a bimodal adoption curve.

Task Scheduling Feature Added

ChatGPT now includes a task scheduling feature that supports recurring reminders and calendar integration. This update is being rolled out to users on Plus, Team, and Pro tiers, with enterprise deployment expected later.
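To put the Epoch AI figure in perspective, a quick back-of-the-envelope calculation (the household comparison in the comment is a rough approximation, not from the report):

```python
# Scaling the ~0.3 Wh-per-prompt estimate to large daily volumes.
WH_PER_PROMPT = 0.3  # watt-hours per text-only prompt, per Epoch AI

def daily_energy_kwh(prompts_per_day):
    """Total energy in kilowatt-hours for a given daily prompt volume."""
    return prompts_per_day * WH_PER_PROMPT / 1000

# One million prompts per day works out to roughly 300 kWh —
# on the order of what about ten US households use in a day.
print(round(daily_energy_kwh(1_000_000), 3))
```

The same arithmetic is why the per-prompt figure matters: at ChatGPT's reported scale of hundreds of millions of weekly users, even fractions of a watt-hour compound quickly.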
Trait-Based Customization in Testing

A personality trait system is currently being tested, allowing users to configure ChatGPT's tone and behavior (e.g., "formal," "chatty," or "youthful"). This functionality remains limited to select users during the initial rollout.

Policy and Regulatory Developments

Operator Data May Be Retained Up to 90 Days

OpenAI has disclosed that data processed by the Operator tool may be stored for up to 90 days, exceeding the 30-day retention window currently applied to standard ChatGPT interactions.

ChatGPT Gov Released for U.S. Federal Agencies

A new version of ChatGPT, branded ChatGPT Gov, has been released to meet compliance standards required by U.S. government institutions. The product mirrors ChatGPT Enterprise but includes additional data governance features.

Teen ChatGPT Usage for Homework Has Doubled

According to a survey conducted by Pew Research, 26 percent of U.S. teens now use ChatGPT for schoolwork—up from 13 percent two years ago. While most teens consider it a useful tool for research, concerns remain about misinformation and over-reliance.

Legal and Ethical Concerns

Persuasion Research Conducted via Reddit

OpenAI has conducted persuasion-related experiments using the r/ChangeMyView subreddit. The objective is to measure the model's ability to influence opinions in comparison to human participants, raising questions around AI's role in argumentation and public discourse.

Legal Risks: Libel and Harmful Instructions

Incidents involving ChatGPT generating harmful or false outputs have drawn legal scrutiny. In one case, an Australian mayor considered legal action after the model falsely claimed he had been imprisoned. Separately, ChatGPT was linked to instructions for criminal activities shared on Discord, prompting moderation challenges.
Updated Data Privacy Tools Available

OpenAI has updated its privacy request form, allowing users in eligible jurisdictions to submit requests to delete data, opt out of training, or restrict data processing. These changes aim to increase transparency and comply with international data protection standards.

Sources: (TechCrunch)
Meta Tests AI Photo Suggestions in Facebook App
AI-Generated Suggestions from Your Photos

Meta is testing a new Facebook feature that leverages AI to analyze and suggest creative ideas from users' phone camera rolls—even if the photos haven't been uploaded to the platform. These suggestions may include collages, recap videos, AI restyling, and themed highlights such as birthdays or graduations.

How It Works

When some users attempt to add media to Facebook Stories via the mobile app, a pop-up appears asking if they'd like to enable "cloud processing" for their camera roll. According to a screenshot shared by TechCrunch, the message states: "Get ideas like collages, recaps, AI restyling or themes like birthdays or graduations. To create ideas for you, we'll select media from your camera roll and upload it to our cloud on an ongoing basis, based on info like time, location, or themes." Meta emphasizes that the generated suggestions are only visible to the user unless they choose to share them.

Data Usage and Privacy

Tapping "Allow" on the prompt opts the user into Meta's AI Terms, which permit analysis of the media, facial features, and metadata like date and object presence. Meta clarifies that this data will not be used for ad targeting. Additionally, Meta assures users that:

Limited Rollout and Testing Regions

According to Meta spokesperson Maria Cubeta, the feature is currently being tested with a small number of users in the United States and Canada. Not all users are seeing the option, as this is part of a controlled test. Cubeta stated: "We're exploring ways to make content sharing easier by testing suggestions of ready-to-share and curated content from a person's camera roll. These suggestions are opt-in only, only shown to you unless you decide to share them, and can be turned off at any time."

What's Next?

Though the feature is still in early testing, it signals Meta's continued push to integrate generative AI into everyday user experiences.
Whether it becomes a staple of the Facebook app remains to be seen, especially as user reactions—and privacy concerns—begin to surface. Meta has not issued further comments on the feature's timeline or broader rollout plans.

Sources: (Mashable)
Google Photos Integrates Classic Search with AI for Faster, Smarter Results

The revamped Ask Photos feature aims to combine performance with intelligence.

Reintroducing Ask Photos

Google has resumed the rollout of its AI-powered Ask Photos feature in Google Photos after pausing it due to performance issues. Originally announced at Google I/O, Ask Photos allows users to search their photo library using everyday language, powered by Google's Gemini AI model. The AI analyzes both the visual content of photos and associated metadata—such as dates, locations, and recognized faces—to return more relevant search results.

Initial Rollout Faced User Frustrations

Shortly after its initial launch, users reported several problems: slow response times, inaccurate results, and poor overall experience. In response, Google Photos product manager Jamie Aspinall posted on X (formerly Twitter), stating: "Ask Photos isn't where it needs to be, in terms of latency, quality and UX." As a result, Google paused the feature's release to make key improvements.

What's New in the Updated Version

To address these issues, Google has now blended the classic photo search engine with the Ask Photos AI system. This update brings a two-tiered search experience:

This hybrid approach reduces wait times while maintaining the intelligence of AI-enhanced search.

How the Enhanced Experience Works

For example, a search for "white dog" might instantly display several matching images. A few seconds later, Ask Photos may add additional results that recognize the dog by name (if labeled) and include contextual information like the earliest appearance of the pet in your photo history. This layered result delivery provides both speed and depth without overwhelming the user.

Flexibility for Users

Recognizing that not all users may prefer AI assistance, Google has kept the option to switch back to the classic search experience.
This ensures that users can search in the way they find most comfortable and efficient.

Availability and Requirements

The updated Ask Photos feature is once again rolling out to users in the United States. To access it, users must meet the following criteria:

With this relaunch, Google is aiming to offer the best of both worlds—retaining the speed and reliability of classic search while gradually enhancing it with the power of AI. Ask Photos now promises a more responsive, intuitive way to rediscover memories through your digital photo collection.

Sources: (TechCrunch)
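The two-tiered delivery described above can be sketched in miniature: a fast keyword pass returns immediately, and a slower, richer pass adds context afterward. This is an illustrative sketch, not Google's implementation; the photo records, `classic_search`, and `ai_enrich` functions are all invented for the example.

```python
# Toy model of a two-tiered photo search: quick results first,
# "AI" results (here just date-aware enrichment) merged in later.

def classic_search(photos, term):
    """Fast pass: simple keyword match against photo labels."""
    return [p for p in photos if term in p["labels"]]

def ai_enrich(photos, term):
    """Slow pass (stubbed): a real system would add context such as a
    pet's name or its earliest appearance; here we just sort by date."""
    hits = [p for p in photos if term in p["labels"]]
    return sorted(hits, key=lambda p: p["date"])

photos = [
    {"id": 1, "labels": {"white", "dog"}, "date": "2024-05-01"},
    {"id": 2, "labels": {"beach"}, "date": "2023-07-12"},
    {"id": 3, "labels": {"dog"}, "date": "2022-01-30"},
]

quick = classic_search(photos, "dog")  # shown to the user immediately
deep = ai_enrich(photos, "dog")        # appended a few seconds later
```

In the "white dog" example from the article, `quick` corresponds to the instant visual matches and `deep` to the follow-up results that surface the pet's earliest appearance.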
Chrome for Android Adds Bottom Address Bar Option
Chrome for Android Now Lets Users Move the Address Bar to the Bottom

New design feature gives users more control over their browsing experience, enabling easier one-handed navigation by allowing the address bar to be repositioned to the bottom of the screen—following a similar move by Safari on iOS.

Google Embraces Bottom Navigation for Better Usability

Google has officially introduced the ability to move the Chrome address bar to the bottom of the screen on Android devices. This change, which mirrors Apple's earlier move with Safari in iOS 15, is aimed at improving accessibility and comfort during one-handed browsing.

A Feature Inspired by Apple, But Handled Differently

While Apple made this design shift in 2021, Google previously brought the bottom address bar option to Chrome for iOS. Now, it's extending the same flexibility to Android. Unlike Apple's initial approach, Google is making this a user-controlled setting from the start, rather than a forced change.

How to Enable the Bottom Address Bar in Chrome for Android

Users can switch the address bar position in two simple ways:

Why This Matters for Mobile Users

This update is a usability win, particularly for users of large-screen smartphones. By placing the address bar closer to where thumbs naturally rest, Chrome becomes more ergonomic and user-friendly—especially for one-handed use.

Learning from Safari's Stumble

Apple faced backlash when it first introduced the floating address bar in Safari, which interfered with site elements and disrupted the browsing experience. After user feedback, Apple revised the feature, anchoring the bar below the content and making it optional—an approach Google appears to be following from the outset.

Gradual Rollout to Android Devices

Google confirmed that the new feature begins rolling out today and will reach all Android Chrome users in the coming weeks.

Sources: (TechCrunch)
UK Regulator May Force Google to Rank Businesses Fairly
CMA Targets Fairer Search Rankings and More Consumer Choice

Britain's Competition and Markets Authority (CMA) is preparing to enforce stricter regulations on Google, aiming to promote fairer business rankings in search results and ensure greater consumer access to alternative digital services. This initiative marks the first potential use of the CMA's expanded powers to oversee major tech platforms.

Strategic Market Status Could Reshape Google's Role

The CMA plans to designate Google—owned by Alphabet Inc.—with Strategic Market Status (SMS) by October. If confirmed, this status would grant the regulator significant authority to intervene in Google's operations, particularly its search services. The objective is to make the digital market more competitive, thereby boosting innovation and economic development in the UK.

What the Regulation Might Change

Key changes under consideration include:

These steps are designed to lower entry barriers for smaller tech firms and reduce Google's influence over the search and digital advertising markets.

Google Warns Against "Punitive" Measures

In response, Google has expressed concerns. Oliver Bethell, Senior Director for Competition at Google, cautioned against "punitive regulation," arguing it could deter the company from introducing new services and features in the UK. He called for a "proportionate, evidence-based" approach to regulation.

Balancing Innovation and Oversight

CMA Chief Executive Sarah Cardell acknowledged Google's dominant position—handling over 90% of UK search queries—and the platform's benefits for users and businesses. However, she emphasized the need for stronger competition to encourage innovation and long-term growth. The UK's approach differs from the European Union's broader digital regulatory framework. Post-Brexit, Britain is attempting to balance tech oversight with economic opportunity.
AI in Focus, But Gemini Excluded (For Now)

As part of its review, the CMA has developed a draft roadmap of changes Google could implement ahead of a final ruling. The regulator is paying close attention to the rise of generative AI, which is rapidly transforming the search landscape. While tools like AI Overviews are part of the proposed designation, Google's Gemini assistant remains outside the scope—though the CMA intends to keep it under observation.

Looking Ahead: Advertising and Mobile Ecosystems Also Under Scrutiny

Further investigations are scheduled to begin in 2026, with the CMA planning to examine Google's practices related to specialized search services and ad transparency. Additionally, the regulator is assessing Google and Apple's dominance in the mobile operating system space, particularly with Android, signaling potential future actions.

A Bold Move in Global Tech Regulation

With the ability to enforce compliance through fines and direct action, the CMA's latest move underscores its intent to challenge the market power of global tech giants. As scrutiny intensifies in both the US and EU, the UK joins a growing chorus of regulators looking to reshape digital competition without stifling technological innovation.

Sources: (Mashable)
xAI’s Grok May Soon Edit Spreadsheets, Leak Suggests
xAI's Grok Could Soon Edit Spreadsheets, Leak Reveals

Advanced File Editing Capabilities in the Works

A recent leak suggests that xAI, the AI startup founded by Elon Musk, is developing a powerful file editor for its AI assistant, Grok. The tool appears to include support for spreadsheets and signals a broader push by xAI to enter the productivity software space—potentially positioning Grok as a direct competitor to OpenAI, Microsoft, and Google. "You can talk to Grok and ask it to assist you at the same time you're editing the files!" wrote reverse engineer Nima Owji, who uncovered and shared the leak.

Building Toward Multimodal AI Workspaces

While xAI has yet to formally outline its complete strategy for productivity tools, the company has made several moves indicating its ambitions. In April 2025, xAI launched Grok Studio, a split-screen interface that allows users to co-create documents, reports, code, and even browser games with Grok in real time. xAI also rolled out Workspaces, a feature designed to organize documents, files, and AI conversations in a unified environment—further blurring the line between traditional apps and AI collaboration.

Taking Aim at Google and Microsoft

Google's Gemini Workspace currently offers the most similar functionality, allowing users to edit Docs and Sheets while chatting with Gemini. However, Gemini Workspace remains limited to Google's ecosystem. xAI's approach could provide a more flexible alternative, potentially integrating with a wider range of file types and services. OpenAI and Microsoft have also introduced tools that embed AI into document creation and collaboration, but Grok's real-time conversational editing could set it apart—especially if it supports multiple file formats beyond spreadsheets.
The "Everything App" Vision Expands

If the leaked capabilities come to fruition, they would mark a significant step toward Elon Musk's larger vision for X as an "everything app"—one that blends social media, payments, messaging, and productivity under a single umbrella. Whether this editor is the beginning of a full productivity suite remains to be seen, but the move underscores xAI's determination to redefine how we interact with AI in daily workflows.

Sources: (TechCrunch)
OpenAI Ends Partnership with Scale AI After Meta Deal
OpenAI Cuts Ties with Scale AI Following Meta's Billion-Dollar Partnership

OpenAI Rethinks Data Partnerships Amid Industry Shifts

In a major move that signals shifting dynamics in the AI development space, OpenAI has ended its partnership with data-labeling firm Scale AI. This decision comes just days after Meta announced a multi-billion-dollar investment in Scale AI, alongside plans to bring on its CEO, Alexandr Wang. Speaking to Bloomberg, an OpenAI spokesperson confirmed that the company had already started phasing out its reliance on Scale AI before Meta's official announcement. OpenAI is now actively working with other providers to obtain more specialized and secure datasets—critical for training the next generation of advanced AI models.

From Promising Collaboration to Sudden Departure

This shift marks a stark contrast from earlier statements made by OpenAI's Chief Financial Officer, Sarah Friar. She had previously suggested that the company would maintain its working relationship with Scale AI. However, the tone has since changed dramatically, as OpenAI now looks to build relationships with partners that are perceived as more neutral and better aligned with its evolving priorities.

Meta Deal Triggers Industry-Wide Concerns

Meta's sudden partnership with Scale AI raised eyebrows across the tech world. Industry insiders view the collaboration—and Wang's involvement in Meta's broader AI plans—as a potential conflict of interest. As a result, AI developers and data-hungry companies are reconsidering their reliance on Scale AI. According to Reuters, Google is also exploring the possibility of severing its ties with Scale AI. This trend could signal a broader industry pivot away from the startup, driven by a desire to maintain data privacy and competitive distance from Meta's growing influence.
Competitors Seize the Moment

As confidence in Scale AI wavers, rival data providers have reported a surge in interest from companies eager to partner with independent, non-aligned firms. These providers are positioning themselves as more transparent and neutral alternatives—qualities that are increasingly valuable in an AI arms race where proprietary data is a key differentiator.

Scale AI Responds to Allegations and Doubts

In an attempt to reassure its clients, Scale AI's general counsel published a blog post on Wednesday. The post emphasized that Meta would not receive special treatment and that the company remains committed to protecting sensitive client data. It also stated that Alexandr Wang would not be involved in day-to-day operations, despite his deep ties to Meta's new initiative. Still, those assurances appear to have had limited effect. The departure of major customers like OpenAI sends a strong message to the market that Scale AI may no longer be perceived as a fully neutral player in the competitive AI ecosystem.

A Strategic Pivot Toward Government and Enterprise Clients

Facing increased scrutiny and client departures, Scale AI is now shifting its focus. In a separate blog post, interim CEO Jason Droege outlined a new direction for the company. He said Scale AI will "double down" on building custom AI applications—particularly for government agencies and large enterprises. This move suggests Scale AI is preparing to diversify its offerings and reduce its dependence on the data-labeling business, which is under pressure from both market forces and client trust issues.

What's Next?

OpenAI's decision could mark a turning point in how leading AI firms select their data partners. With major players now seeking neutrality, specialization, and stricter privacy assurances, the industry's landscape is shifting rapidly.
For Scale AI, the challenge lies in rebuilding trust and carving out a new role in a competitive and increasingly cautious AI ecosystem.

Sources: (TechCrunch)
Google AI Mode Adds Voice Chat to Search Live
Google Introduces Voice Conversations to AI Mode in Search

Hands-free, real-time search interactions have been made possible with Google's new Search Live update.

Voice-Driven Search Experience Launched

A new live voice interaction feature has been launched by Google, allowing users to have dynamic conversations with Search AI. This addition is part of the company's efforts to make information retrieval more natural, intuitive, and accessible on the go.

How It Works

The update enables users to initiate a free-flowing, real-time voice dialogue with Google's experimental AI Mode. By tapping the new "Live" icon in the Google app, spoken queries are responded to with AI-generated audio, and follow-up questions can be asked conversationally.

Enhanced for Multitasking and Exploration

The feature is expected to be particularly helpful for multitasking. For instance, while packing for a trip, users can ask something like "What are some tips for preventing a linen dress from wrinkling in a suitcase?" and follow up with, "What should I do if it still wrinkles?"—all without typing.

Interactive Visual Support and Transcripts

During conversations, relevant links are displayed on screen, allowing users to explore related web content. Additionally, a "transcript" button lets users view responses in text form or continue their questions by typing. Conversation history can also be revisited through the AI Mode history.

Powered by Gemini and Google Search Systems

According to Liza Ma, Director of Product Management at Google Search, the voice functionality is supported by a custom version of Gemini, built on Google's industry-leading information systems. It also utilizes a "query fan-out" method, aimed at delivering broader, more diverse web content for deeper discovery.

Works in the Background with Multitasking Support

Search Live is designed to function seamlessly in the background.
This means users can continue conversations with the AI while using other apps—making it well-suited for mobile users juggling multiple tasks.

Upcoming Features: Camera Integration

Google has announced plans to expand Live capabilities in the coming months. One major upcoming feature will allow users to ask questions based on what their phone's camera sees in real time, a preview of which was shown at Google I/O in May.

Sources: (TechCrunch)
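The "query fan-out" method mentioned above can be sketched as a simple pattern: expand one spoken query into several related sub-queries, run them concurrently, and merge the results for broader coverage. This is a rough illustration of the general technique, not Google's implementation; `expand` and `search` are stand-in functions.

```python
# Toy query fan-out: expand, search concurrently, merge and dedupe.
from concurrent.futures import ThreadPoolExecutor

def expand(query):
    """Stand-in for model-driven query expansion."""
    return [query, query + " tips", query + " examples"]

def search(subquery):
    """Stand-in for a single web search; returns fake result titles."""
    return [f"result for '{subquery}'"]

def fan_out(query):
    subqueries = expand(query)
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, subqueries)  # runs in parallel
    merged, seen = [], set()
    for results in result_lists:  # map preserves sub-query order
        for r in results:
            if r not in seen:
                seen.add(r)
                merged.append(r)
    return merged

results = fan_out("prevent linen wrinkles")
```

The payoff of the pattern is breadth: a single conversational question yields results drawn from several distinct searches, which matches the "broader, more diverse web content" framing in Google's description.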
AI and the Future of Coding: Rise of Vibe Programming
The Vibe Shift in Coding: Is AI Coming for Engineers?

By TechXNow Team (iteshpal) | June 2025

On a high-resolution screen in Kirkland, Washington, four active terminals hum as artificial intelligence cranks out thousands of lines of code. Veteran software engineer Steve Yegge, formerly of Google and AWS, leans back and watches the show. "One's running tests, one's generating a plan—I'm technically working on four projects at once," Yegge says. "But really, I'm just burning tokens." Welcome to the era of vibe coding—the AI-driven approach to software development that's flipping decades of assumptions about the future of programming.

From Autocomplete to Autonomous

When ChatGPT launched in late 2022, its coding capabilities felt like a productivity boost: helpful autocomplete, faster snippets. But in just a few short years, LLMs (large language models) have evolved into autonomous agents that can spin up entire apps, manipulate files, and even run tests—all from a few lines of human instruction. The term "vibe coding," coined by AI researcher Andrej Karpathy in early 2025, captures this shift: software development through high-level prompting, not painstaking line-by-line construction. As capabilities grow, so do fears. Platforms like X and Bluesky are buzzing with speculation about companies cutting dev teams—or scrapping them entirely. Dario Amodei, CEO of Anthropic, told an audience at the Council on Foreign Relations earlier this year, "We're not far from a world—maybe six months—where AI writes 90% of code. In a year, it could be nearly all of it."

Code Revolution or Coding Bubble?

Not everyone is sold on the AI-coding takeover. Many in the software world argue that while AI can speed up routine tasks, it remains error-prone, unpredictable, and often insecure. The result? A paradox: coding has never been easier to start—but understanding it deeply may be more critical than ever.
MIT economist David Autor compares the situation to transcription work—quickly automated by AI—but warns that complex software engineering won't be so easily replaced. He also points to the elasticity of demand: if the need for software expands like the ride-hailing boom, we might see more code being written by more people, not fewer. "There may be an Uber effect on engineering," Autor says.

Vibe Coding Goes Mainstream

Yegge—now leading AI-coding efforts at Sourcegraph—has become a vibe-coding evangelist. He's even co-authoring a book, Vibe Coding, with veteran developer Gene Kim. Their prediction? AI-driven programming will be the default by year's end. Startups like Cursor and Windsurf are already capitalizing on this movement, with Windsurf reportedly in talks to be acquired by OpenAI. Still, not all engineers are convinced. Many, including Ken Thompson of Anaconda, cite AI's nondeterministic nature—generating different results from the same input—as a serious risk for real-world development. "Younger devs are jumping in. Older ones are cautious," Thompson says. Martin Casado, a partner at Andreessen Horowitz and a board member at Cursor, acknowledges the transformation. "This is the biggest shift in software since we moved beyond assembly," he says. "But AI is better at flash than precision."

The Catch: Quality Still Matters

The promise of vibe coding doesn't erase the pitfalls. Developers report AI introducing critical bugs, security vulnerabilities, and costly design flaws. Some even discover their AI-generated apps only simulate functionality, without actually delivering it. "You need to watch them like toddlers," Yegge quips. A March WIRED survey reflected the divide: about 36% of developers are optimistic about AI coding tools, while 38% remain skeptical. Daniel Jackson, a computer science professor at MIT, worries about "mostly working" code creeping into production. "If you care about the software, you care that it works right," he says.
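The nondeterminism concern raised above has a familiar small-scale analogue in ordinary code: sampling without a pinned seed gives different output on every run. The sketch below illustrates the principle (it is an analogy for LLM sampling, not an LLM).

```python
# Same code, same input — different output unless the seed is pinned.
import random

def sample_tokens(vocab, n, seed=None):
    """Draw n 'tokens' from a vocabulary; seeded draws are reproducible."""
    rng = random.Random(seed)  # seed=None pulls entropy from the OS
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["def", "return", "if", "for", "print"]
a = sample_tokens(vocab, 5, seed=42)
b = sample_tokens(vocab, 5, seed=42)
print(a == b)  # True — identical seeds give identical sequences
```

Production LLM APIs add further sources of variation (temperature, batching, model updates) that a seed alone does not control, which is why reviewers like Thompson treat generated code as something to verify rather than trust.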
Jackson believes future software may evolve to suit AI better—with modular architectures, fewer dependencies, and constant testing. But he also sees danger in relying too heavily on generative tools: "We could end up with a generation of coders who can't debug or secure what they build."

Coding Isn't Dead—But It's Evolving

Despite concerns, AI isn't wiping out engineering roles just yet. Companies like Honeycomb and Milestone say they're seeing more demand for better engineers, not fewer. Christine Yen, CEO of Honeycomb, says AI helps with routine tasks, but developers still bring the critical thinking. "The hard part of building systems isn't volume—it's judgment," she says. Naveen Rao, VP of AI at Databricks, notes a real shift in team sizes. "Where I once needed 50 engineers, now maybe I need 30. That's a real change." But coding skills, he argues, are still essential. "It's like telling kids not to learn math." Yegge and Kim agree. They see AI as a powerful tool, but one that still demands engineering discipline. Their advice? Build modularly, test constantly, and embrace experimentation—just don't forget to check your cloud bill. As the AI coding wave accelerates, one thing is clear: knowing how to code is no longer just about syntax—it's about navigating a rapidly changing landscape where human judgment and machine power must collaborate.

Sources: (Wired)
Instagram Adds Grid Edit, Quiet Posts & Spotify Notes
Instagram Gives Users More Control: Custom Grids, Quiet Posts, Spotify Notes, and New Creator Support

Instagram is shaking things up. On Thursday, the social media giant rolled out a series of significant updates that reshape how users manage their profiles and share content. The headline feature gives users the long-awaited power to rearrange their profile grid, but the company isn't stopping there. Instagram also began testing a quiet posting option, launched a new Spotify integration for Notes, and introduced a creator-focused initiative called "Drafts." Together, these updates reflect Instagram's evolving vision: making the platform more personal, expressive, and creator-friendly—without the usual pressure of performance.

Rearranging the Grid: A Top User Request Becomes Reality

Instagram users have asked for more profile customization for years, and now the platform is delivering. With the new grid editing feature, anyone can now manually reorder the posts on their profile. Instead of being locked into the traditional chronological layout, users can create a more intentional and visually appealing grid. This change opens the door for countless possibilities. Creators, brands, and everyday users can now highlight favorite posts, group related content together, or simply craft a more polished aesthetic. Instagram's head, Adam Mosseri, hinted back in January that the company planned to release this feature. In fact, Instagram started developing it in 2022, but the company paused its rollout at the time. Fast forward to 2025, and Instagram is finally putting the power directly into users' hands.

Quiet Posting: Share Without the Spotlight

Instagram also started testing a new quiet posting feature designed to give users a low-pressure way to share content. Instead of broadcasting new posts to every follower's feed, quiet posts go straight to the user's profile.
This approach allows people to build out their profiles and express themselves without the pressure of likes, comments, or instant visibility. Adam Mosseri explained the reasoning behind the test in a recent blog post. "Creative expression can feel intimidating, especially when posting something to feed," he wrote. Quiet posting aims to reduce that anxiety, giving users more freedom to experiment and post casually. This option will likely appeal to everyday users more than public figures or influencers. While creators may continue to seek wide engagement, quiet posting gives casual users a way to focus on self-expression rather than performance metrics. Instagram already tested a similar concept with "trial reels," which allow creators to post videos without alerting followers. The results have been encouraging. Instagram revealed that 40% of creators posted Reels more frequently after using the trial feature, and 80% of them reached more viewers outside their follower base.

Spotify Integration Brings Music to Notes

Instagram also launched a new music feature that connects directly with Spotify. With this update, users can share the song they're currently listening to directly in their Instagram Notes. Friends and followers will see the track and get a sense of the user's current vibe or music taste. This integration adds a new layer of personality to Notes and brings real-time audio expression into Instagram's ever-evolving feature set. It's another example of the company's focus on personal storytelling and casual social connection.

Introducing Drafts: A New Way to Support Emerging Creators

Beyond new features, Instagram unveiled Drafts, a new initiative focused on supporting emerging talent in the creator community. Rather than launching a formal creator fund, Instagram is taking a more flexible approach. The Drafts program will offer customized support, which may include funding, mentorship, industry connections, co-creation opportunities, and more.
An Instagram spokesperson shared additional details with TechCrunch, explaining that the platform will collaborate directly with creators to tailor support based on the specific needs of their projects. This individualized support model aims to empower creators in a way that feels authentic and sustainable, rather than one-size-fits-all.

Instagram's Bigger Vision

With these announcements, Instagram is clearly shifting its priorities. The platform wants to help users feel more comfortable sharing, provide tools that allow for deeper self-expression, and create an environment where new voices can thrive without pressure. These updates—grid customization, quiet posting, music in Notes, and Drafts—reflect a broader trend in social media: users want control, creativity, and authenticity. Instagram is finally starting to listen, and it's beginning to act. As the platform continues to evolve in 2025, users can expect even more tools that emphasize expression over performance, and community over competition.

Sources: (TechCrunch)