Tag: Science & Technology

  • Dhruv64 is India’s first 64-bit 1GHz chip, will help make country self-reliant

    India has announced a major step in the development of indigenous semiconductors with Dhruv64, the country’s first homegrown 1.0 GHz, 64-bit dual-core microprocessor. The chip marks a notable advance in India’s efforts to become self-reliant in essential industries and defence technology. As the first of its kind developed within the country, Dhruv64 is positioned to strengthen India’s domestic semiconductor and processor pipeline, which has traditionally depended on foreign designs.
    Dhruv64 was developed by the Centre for Development of Advanced Computing (C-DAC) under the Microprocessor Development Programme (MDP). The microprocessor’s debut arrives amid ongoing government initiatives, such as the Digital India RISC-V program, designed to promote the design, testing, and prototyping of homegrown chips. According to a press release by the Press Information Bureau (PIB), Dhruv64 is a direct result of these support mechanisms, which prioritise building indigenous technology to serve national needs.
    Dhruv64 is said to have both commercial and strategic uses. The press release states that Dhruv64 will help reduce dependence on imported processors, particularly in critical infrastructure sectors such as defence and high-performance computing. According to the PIB, India consumes 20 per cent of all microprocessors made globally. As such, the new processor provides a homegrown alternative for startups, academia, and industry.
    While India has had smaller-scale chip projects in the past, Dhruv64’s 64-bit architecture and 1.0 GHz dual-core design mark a significant leap in terms of power and applicability. This development could pave the way for more advanced, secure, and efficient hardware tailored to India’s unique requirements. The processor’s increased performance and modern architecture mean it can support a wider range of applications, from embedded systems to high-performance computing tasks, thereby expanding India’s technological horizons.
    Following Dhruv64’s successful development, the next generation of indigenous processors is already in progress. These include the Dhanush and Dhanush+ chips, which are currently under development and expected to further bolster India’s self-reliance in strategic technology sectors. The momentum generated by Dhruv64’s success is anticipated to inspire additional projects and investments in local chip design, further reinforcing India’s ambitions in this domain.

  • 4,000 global glaciers will disappear annually by 2055 as the planet warms

    An international research team has quantified how quickly glaciers around the globe will disappear. The alarming projection comes from a team led by ETH Zurich, the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), and Vrije Universiteit Brussel.
    The study, published in Nature Climate Change, highlighted that the Alps could reach their peak glacier loss rate between 2033 and 2041, with global peak loss occurring about a decade later.
    Depending on warming levels, between 2,000 and 4,000 glaciers could vanish annually at the global peak.
    The study revealed that the fate of glaciers is closely tied to how much the planet warms.
    If global temperatures rise by 2.7C, projections estimate only about 110 glaciers will remain in Central Europe by 2100, just 3 per cent of current Alpine glaciers. A 4C increase could reduce this number to around 20.
    Even medium-sized glaciers, like the Rhone Glacier, could shrink to remnants, and the Aletsch Glacier could fragment into smaller sections. Researchers have documented more than 1,000 glacier losses in Switzerland from 1973 to 2016.
    Regions with numerous small glaciers at low elevations or near the equator, such as the Alps, Caucasus, Rocky Mountains, Andes, and African mountains, are especially vulnerable.
    “In these regions, more than half of all glaciers are expected to vanish within the next ten to twenty years,” said Van Tricht, a researcher at ETH Zurich’s Chair of Glaciology and the WSL.
    Using three advanced glacier models and several climate scenarios, the researchers provided detailed forecasts for different mountain regions.
    In the Alps, a 1.5C scenario would leave about 430 of today’s 3,000 glaciers by 2100 (12 per cent), while a 2.0C rise would leave around 8 per cent (270 glaciers).
    At a 4C increase, only about 1 per cent survives, roughly 20 glaciers.
    For comparison, the Rocky Mountains could retain about 4,400 glaciers at a 1.5C rise, roughly 25 per cent of today’s 18,000.
    The Andes and Central Asia would each lose over 90 per cent of their glaciers in the higher warming scenario. Globally, just 18,000 glaciers would remain at 4C, compared to about 100,000.

  • OpenAI updates its ChatGPT Images for better and faster performance

    If your AI-generated images have ever taken a little too long to render, or come back with mysterious changes you definitely didn’t ask for, OpenAI thinks it has a fix. The company has announced a major update to ChatGPT Images that promises to make visual creation faster, sharper and far better at following instructions. In short: fewer surprises, more control, and a lot less waiting around. The update, revealed in a blog post on Tuesday, is part of a wider push inside OpenAI to transform ChatGPT from an impressive novelty into a genuinely practical everyday tool. And this time, the company says, the improvements are not subtle.
    According to OpenAI, the revamped image generator offers dramatically improved instruction-following, far more precise editing tools, and image generation speeds that are up to four times faster than before. That combination, it argues, fundamentally changes what users can realistically do with AI-generated visuals.
    “The update includes much stronger instruction following, highly precise editing, and up to 4x faster generation speed, making image creation and iteration much more usable,” the company wrote. “This marks a shift from novelty image generation to practical, high-fidelity visual creation, turning ChatGPT into a fast, flexible creative studio for everyday edits, expressive transformations, and real-world use.”
    One of the biggest frustrations with earlier versions of ChatGPT Images was their tendency to overstep. Ask for a small tweak, and the system might quietly rework half the image. OpenAI says that problem has been a key focus of this update.
    The company claims the new version is much better at making targeted changes without altering other elements, allowing users to refine images rather than repeatedly start from scratch. For designers, marketers and casual creators alike, this could mean smoother workflows and far less trial and error.
    Speed is the other headline change. With generation times now reportedly up to four times faster, OpenAI is positioning ChatGPT Images as something you can iterate with in real time, rather than a tool that interrupts creative momentum.
    The timing of the update is no coincidence. It arrives just weeks after OpenAI chief executive Sam Altman reportedly issued what he described as a “code red” memo inside the company, calling for urgent improvements to ChatGPT’s overall quality.
    In that internal document, Altman said OpenAI still had significant work to do to improve the chatbot’s day-to-day experience, including answering a broader range of questions and boosting speed, reliability and personalisation. The memo was first reported by The Wall Street Journal.
    The urgency reflects growing pressure from rivals. Competitors have been closing the gap on OpenAI’s early lead, with Google last month releasing a new version of its Gemini model that outperformed OpenAI on several industry benchmark tests.

  • Apple iPhone Fold to shake up foldable market in 2026 with huge share and sky high price

    Apple is expected to launch its first-ever foldable iPhone in 2026. After years of waiting, the Cupertino giant will likely finally join the foldable race against the likes of Samsung and Google. Reports now suggest the device could put Apple at the forefront of the foldable segment, with a market share as high as 34 per cent.
    According to International Data Corporation (IDC), the iPhone Fold is expected to capture over 22 per cent of unit sales and 34 per cent of the total market value in its debut year. These estimates come despite the expected price tag of $2,400 (roughly Rs 2,15,000) for the Apple foldable. For context, the Samsung Galaxy Z Fold 7 is priced at Rs 1,74,999 in India.
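    Those two shares together imply a premium average price: a product that captures a larger slice of market value than of unit sales must sell above the market’s average price. A rough back-of-the-envelope sketch in Python, using only the IDC shares quoted above:

```python
# IDC's projected shares for the iPhone Fold's debut year (from the article).
unit_share = 0.22    # share of foldable unit sales
value_share = 0.34   # share of total foldable market value

# A value share larger than the unit share means the average selling price
# sits above the market average by the ratio of the two shares.
asp_multiple = value_share / unit_share
print(f"~{asp_multiple:.2f}x the foldable market's average selling price")
```

    Combined with the rumoured $2,400 price tag, a multiple of roughly one and a half times the market average is consistent with the iPhone Fold sitting at the top of the foldable price range.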
    Apple’s entry into the foldable smartphone segment is seen as a major development that could accelerate growth and competition among established brands. The global foldable smartphone market is forecast to grow by 30 per cent year-over-year in 2026, a sharp increase from the previous 6 per cent projection.
    Multiple high-profile launches are expected to drive this growth, including Samsung’s upcoming Galaxy Z Trifold and new models from Huawei. However, Apple’s arrival at the end of the year is cited as a ‘game changer’ in the foldable category.
    The iPhone Fold is expected to debut in fall 2026, likely alongside the iPhone 18 Pro lineup. The first-ever Apple foldable is tipped to feature a book-like, inward-folding design with a 5.5-inch exterior display and a 7.8-inch interior display.
    Rumoured features for the iPhone Fold also include an under-display selfie camera and a crease-minimising hinge. The device is expected to be powered by the upcoming A20 Pro chipset, the same as the iPhone 18 Pro models. Apple is also rumoured to pack at least a 5,400mAh battery. If true, this would be larger than most foldables, including the Galaxy Z Fold 7 and the Google Pixel 10 Pro Fold.

  • Meta updates Facebook so you can see more reels, here’s what is changing

    Meta is changing the way your Facebook feed works, focusing on updated design elements and improved access to Reels and other popular features. The company hopes to make it easier for users to see content they are interested in.
    The update, which will soon reach users globally, centres on a more immersive feed and streamlined navigation bars. According to Meta, these changes are intended to make the platform easier to use while highlighting trending content.
    The overhaul will affect several core areas of the app, including search, navigation, feed, and content creation tools. Now, you will see a grid layout for search results and for posts containing multiple photos. The company states that this will make it easier to browse through the different images.
    Meta emphasised its goal to streamline the Facebook experience, with particular attention to visual content. Along with Reels, the platform aims to surface more relevant and engaging media within users’ feeds.
    Reels will now be easier to find and watch, thanks to their dedicated tab in the lower navigation bar. Changes to the navigation layout mean the most-used features—including Reels, Friends, Marketplace, and Profile—are immediately accessible.
    Additional refinements have been made to the menu and notification tabs, which Meta describes as “refreshed.” Users will also be able to access Meta AI tools and quickly view Stories from friends.
    Facebook is also experimenting with new ways for users to engage with content. Search results will soon display as an immersive grid, and Meta confirmed that it is testing a new “full-screen viewer” that lets users view photo and video results in full screen as well.
    The update also includes enhancements to Facebook’s discovery algorithm, aiming to recommend friends and content based on shared interests, such as music, travel, or favourite shows. Users can choose which interests appear on their profiles and provide feedback on what content they prefer to see, enhancing control and personalisation in their feed.
    Content creation has also received a usability boost, with key tools like adding music or tagging friends now easier to access. The changes are designed to help users create and share posts with minimal friction, reflecting ongoing efforts to keep Facebook both familiar and up to date.

  • James Webb telescope breaks own record, spots oldest star explosion ever known

    Recent results from the James Webb Space Telescope (JWST) have once again shifted the boundaries of what astronomers can observe. Scientists now report identifying the earliest supernova ever found, erupting just 730 million years after the universe’s birth, approximately five per cent into cosmic history. This milestone not only redefines astronomical records but also clarifies how the universe’s first stars and supermassive black holes came to be.
    On March 14, 2025, the French–Chinese Space-based multi-band astronomical Variable Objects Monitor (SVOM) picked up a gamma-ray burst: GRB250314A. Within 24 hours, telescopes in the Canary Islands and Chile, alongside NASA’s Neil Gehrels Swift Observatory, zeroed in on the event.
    This event overtakes the space telescope’s previous record-holder: a supernova seen 1.8 billion years after the Big Bang.
    Andrew Levan, the main study’s lead author, stated, “With JWST, we were able to immediately demonstrate that this light comes from a supernova—a collapsing massive star. This observation also shows that we can use this space telescope to identify individual stars from a time when the universe was only about 5 per cent of its current age, roughly 0.7 billion years old.”
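    The “about 5 per cent” figure follows directly from the numbers quoted; a quick sanity check in Python (assuming the commonly cited ~13.8-billion-year age of the universe, which the article does not state explicitly):

```python
age_universe_gyr = 13.8   # assumed current age of the universe, in billions of years
t_supernova_gyr = 0.73    # article: supernova erupted 730 million years after the Big Bang

fraction = t_supernova_gyr / age_universe_gyr
print(f"{fraction:.1%} of cosmic history")  # roughly 5%
```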
    Despite its immense age, the supernova displays strong similarities to those found in nearby cosmic neighbourhoods. However, the research team emphasises the need for expanded datasets to discern any nuanced differences. These forthcoming insights may clarify what transpired within the universe’s first billion years.
    JWST has also pinpointed the supernova’s host galaxy, GS 3073. This galaxy holds a nitrogen-to-oxygen ratio measured at 0.46, a value that far exceeds what usual stellar activity produces. Such a ratio proved pivotal.
    JWST has managed to capture light from GS 3073, even from such a remote distance. In JWST’s imagery, the galaxy emerges as a dim red speck, squeezed into only a handful of pixels.
    The collaborating international group has now secured approval to use JWST for further observations of additional afterglows from related gamma-ray bursts.

  • Is this comet flying backwards? Nasa spacecraft captures comet 2025 R2 (Swan)

    Nasa’s Polarimeter to Unify the Corona and Heliosphere (Punch) mission has provided a detailed look at Comet 2025 R2 (Swan), tracking its movement from August to early October 2025. The spacecraft collected images every few minutes, offering one of the most frequent observation cadences ever achieved for a comet.
    The comet, first identified on September 11 by Ukrainian amateur astronomer Vladimir Bezugly in SOHO spacecraft data, was later found in Punch records dating back to August 7, as it journeyed near Mars and the star Spica. Processed videos created from Punch data display Comet Swan moving leftward, away from the Sun, which gives the visual impression that the comet is moving backwards.
    This illusion occurs because the solar wind, a continuous stream of charged particles emanating from the Sun, pushes the comet’s tail in the same direction as its orbital motion, reversing the expected appearance.
    Punch’s four satellites closely monitored the dynamic changes in the comet’s tail, including its growth, shrinkage, and flickering. These multi-vantage observations enabled researchers to better understand how the solar wind interacts with cometary material.
    The SOHO mission, a joint initiative between Nasa and ESA, celebrates its 30th launch anniversary in December 2025. SOHO’s Swan instrument has contributed to the discovery of over 5,000 comets, with many, such as Comet Swan, identified by citizen scientists like Bezugly.
    Comet Swan reached its closest point to the Sun, or perihelion, on September 12 at a distance of 0.5 astronomical units. The comet brightened rapidly, increasing from magnitude 11 to 8, and peaked at magnitude 6 during its closest approach to Earth on October 20, when it was just 0.26 astronomical units away. A possible fragmentation event was detected in early November, suggesting shifts in its physical structure.
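    The magnitude scale is logarithmic and inverted (lower numbers are brighter, and a drop of 5 magnitudes means a 100-fold increase in brightness), so the rise from magnitude 11 to 6 described above corresponds to the comet becoming about 100 times brighter. A minimal Python sketch of the conversion:

```python
def brightness_ratio(fainter_mag, brighter_mag):
    """Flux ratio implied by two apparent magnitudes.

    Each step of 5 magnitudes corresponds to a factor of 100 in brightness,
    so the ratio is 10 ** (0.4 * difference).
    """
    return 10 ** (0.4 * (fainter_mag - brighter_mag))

# Comet Swan's reported brightening from magnitude 11 to its peak of 6:
print(brightness_ratio(11, 6))  # a factor of 100
```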
    Punch’s high-frequency imaging, capturing the comet every four minutes through October 5, surpassed the traditional daily observation schedule. Principal investigator Craig DeForest emphasized the advantages of this approach, noting its value in examining the interactions between the solar wind and various objects in the solar system.
    Gina DiBraccio, a heliophysicist at Nasa Goddard, commented on the benefits of the multi-vantage observations: “Watching the Sun’s effects from multiple vantage points… gives us a complete picture of the space environment,” she said. These findings help improve models of how solar wind can affect comets, planets, astronauts, and technological systems on Earth.

  • OpenAI working on a secret Garlic AI model to challenge Google Gemini 3 and Opus 4.5 in coding and reasoning

    The race to build the most powerful artificial intelligence system has entered a new phase, with Microsoft-backed OpenAI quietly developing a large language model called Garlic. The model is being designed to rival Google’s Gemini 3 and Anthropic’s Opus 4.5, particularly in advanced reasoning and coding abilities, according to a report by The Information. Early internal tests suggest Garlic is performing strongly and could debut as GPT-5.2 or GPT-5.5 by early next year.
    The Garlic project comes amid growing competition following Google’s success with Gemini 3. According to The Information, OpenAI’s Chief Research Officer, Mark Chen, told colleagues that Garlic had shown “strong performance” across multiple benchmarks, including reasoning and programming, where Google and Anthropic currently hold an edge.
    CEO Sam Altman has reportedly declared a “code red” inside the company to improve ChatGPT and reclaim OpenAI’s lead in the AI race. He told staff that OpenAI’s new reasoning model was already “ahead” of Gemini 3 in its own internal evaluations. Although OpenAI has not commented publicly, insiders say the company is fast-tracking Garlic’s release, aiming for an early 2026 rollout.
    Garlic reportedly builds on lessons from Shallotpeat, an earlier in-house model that Altman mentioned to employees in October. While Shallotpeat was designed to challenge Gemini 3, Garlic incorporates bug fixes and refinements from that project, particularly in the pretraining phase. This stage teaches a model to recognise relationships in massive datasets drawn from across the internet.
    According to Chen, Garlic represents a leap forward in pretraining efficiency. He told colleagues that the team had managed to “infuse a smaller model with the same amount of knowledge” that previously required a much larger one. This advancement means Garlic could deliver GPT-4.5-level performance at lower cost and faster speed.
    The breakthrough comes as Google has been touting similar improvements with Gemini 3’s training process. OpenAI’s progress with Garlic could counterbalance that advantage and potentially give the company a more efficient path to future upgrades.
    Chen said Garlic had already surpassed OpenAI’s “previous best” pretraining results and resolved key technical bottlenecks that affected GPT-4.5, which launched earlier this year. With these improvements, OpenAI is confident it can now develop smaller yet more capable models without inflating training costs.
    Before Garlic launches, it will undergo post-training with specialised datasets, along with safety testing and evaluation. Sources also claim the success of Garlic has already allowed OpenAI to begin working on an even more advanced successor model, building on the lessons learned during its development.

  • Putting Pompeii’s pieces together, with the help of a robot

    Pompeii’s ancient Roman frescoes, shattered and buried for centuries, could get a second life thanks to a pioneering robotic system designed to support archaeologists in one of their most painstaking tasks: reassembling fragmented artefacts. The technology, developed under an EU-funded project called RePAIR, combines advanced image recognition, AI-driven puzzle-solving, and ultra-precise robotic hands to accelerate traditionally slow and often frustrating restoration work.
    Launched in 2021 and coordinated by Venice’s Ca’ Foscari University, the robotic project showcased in Pompeii on Thursday brought together international research teams that have used the archaeological site as their testing ground. The experimental project “actually started from a very concrete necessity to recompose fragments of frescoes that had been destroyed during the Second World War,” said the site’s director Gabriel Zuchtriegel.
    Researchers believe the technology could transform restoration practices worldwide.
    The robot uses twin arms equipped with flexible hands in two sizes and vision sensors to identify, grip and assemble fragments without damaging their delicate surfaces.
    The once-thriving city of Pompeii, near Naples, and its surrounding countryside were buried under volcanic ash when Mount Vesuvius erupted in AD 79.
    Researchers focused on frescoes preserved in a fragmentary state in Pompeii’s storerooms — two large ceiling paintings which were damaged during the initial eruption and later shattered by bombing in World War Two, and frescoes from the so-called ‘House of the Gladiators’ which collapsed in 2010.
    Replicas were created during this initial testing phase to avoid risking the original pieces. While the robotics teams worked on designing and building the system, experts in artificial intelligence and machine learning developed algorithms to reconstruct the frescoes, matching colours and patterns that may not be visible to the human eye. Experts say the task is similar to solving a giant jigsaw puzzle, with extra difficulties such as missing pieces and no reference image of the final result.
    “It’s like you buy four or five boxes of jigsaw puzzles. You mix everything together, then you throw away the boxes and try to solve four or five puzzles at the same time,” said Marcello Pelillo, the Venice university professor who coordinated the project.


  • ChatGPT Voice mode now built into main interface

    Voice chat is one of the handiest ways to use AI: you can get answers without typing out long prompts. Until now, though, using ChatGPT Voice was a bit tedious, because voice mode lived in a separate interface from your normal chat. OpenAI has now introduced a major update that integrates Voice mode directly into the main chat interface.
    OpenAI confirmed that the feature is rolling out to all smartphone and web users; updating the ChatGPT app is required to access the integrated voice capabilities.
    Now, users can talk to ChatGPT through Voice in the main chat itself, without the need to switch to a separate screen. Previously, ChatGPT’s voice mode required users to leave the main chat and enter a dedicated voice interface, which included limited options such as listening to responses and managing mute or video settings.
    The update lets conversations flow more naturally, allowing users to speak, view responses, and interact with visuals all within a single window.
    The integrated Voice mode supports everything that we have grown accustomed to with ChatGPT. With this update, you will be able to see images and maps as responses are generated. This is expected to streamline workflow and make interactions feel more seamless.
    OpenAI wrote on X, “You can talk, watch answers appear, review earlier messages and see visuals like images or maps in real time.”
    For those who prefer the older interface, OpenAI has included the option to revert to the separate voice mode. Users can do this by going into Settings, selecting Voice Mode, and turning on “Separate mode.” This restores the classic interface and layout for those who want to keep their voice and text interactions distinct.

  • Microsoft to remove Copilot from WhatsApp on Jan 15 as new rules block AI chatbots

    If you’ve been chatting with Microsoft Copilot on WhatsApp, your days with the AI assistant are numbered. Microsoft has confirmed that Copilot will stop working on WhatsApp from January 15, following new policy changes introduced by Meta, the parent company of WhatsApp. After that date, users will only be able to access Copilot through Microsoft’s standalone mobile apps or via the web version.
    The move comes as WhatsApp enforces new rules aimed at limiting general-purpose AI chatbots on its platform. The company wants to ensure that its business tools focus on customer service and commerce workflows rather than hosting large-scale AI assistants created by tech giants.
    Meta first announced its updated platform policy in November, confirming that the WhatsApp Business API would no longer support general AI chatbots used for mass engagement. The goal, according to Meta, is to free up infrastructure for businesses that rely on the API for real-time customer interaction and automated services. This means AI systems such as Microsoft Copilot, OpenAI’s ChatGPT, and Perplexity AI can no longer operate directly through WhatsApp. Instead, only bots designed for specific customer service functions will remain supported.
    Microsoft’s decision to withdraw Copilot from the messaging app follows a similar move by OpenAI, which has already announced that its own WhatsApp integration will end in January. The change is part of a broader shift in how major platforms manage AI access, ensuring that third-party tools do not overload their systems or blur the line between customer assistance and general conversation.
    The biggest downside for users is that Copilot chat history on WhatsApp will not transfer to Microsoft’s other platforms. Because Copilot’s WhatsApp sessions were unauthenticated, they were not linked to users’ Microsoft accounts. As a result, once the integration shuts down, all chat data will be permanently deleted.
    Microsoft has advised users to export their Copilot conversations manually using WhatsApp’s built-in tools before January 15. After that deadline, Copilot will simply stop replying within the app.
    To continue using the assistant, users can switch to the Copilot mobile app on Android or iOS or access it through the official Copilot website. Both versions offer enhanced functionality, including document summarisation, creative writing tools, coding help, and image generation.
    This development highlights the ongoing transformation of the AI chatbot ecosystem, as platforms like WhatsApp draw clearer boundaries between entertainment-focused AI and business automation. For Microsoft, the shift may prove beneficial, driving users toward its dedicated ecosystem where Copilot is deeply integrated with Windows, Office, and Edge.

  • NASA’s Perseverance rover detects ‘mini-lightning’ on Mars

    NASA’s Perseverance rover has obtained evidence that the atmosphere of Mars is electrically active, detecting electrical discharges — what one scientist called “mini-lightning” — often associated with whirlwinds called ‘dust devils’ that regularly saunter over the planet’s surface.
    The six-wheeled rover, exploring Mars since 2021 at a locale called Jezero Crater in its northern hemisphere, picked up these electrical discharges in audio and electromagnetic recordings made by its SuperCam remote-sensing instrument, researchers said.
    It is the first documentation of electrical activity in the thin Martian atmosphere.
    “These discharges represent a major discovery, with direct implications for Martian atmospheric chemistry, climate, habitability and the future of robotic and human exploration,” said planetary scientist Baptiste Chide of the Institute for Research in Astrophysics and Planetology in France, lead author of the study published on Wednesday in the journal Nature.
    “The electrical charges required for these discharges are likely to influence dust transport on Mars, a process fundamental to the planet’s climate and one that remains poorly understood. What’s more, these electrostatic discharges could pose a risk to the electronic equipment of current robotic missions – and even a hazard for astronauts who one day will explore the Red Planet,” Chide said.
    The researchers analysed 28 hours of microphone recordings made by the rover over a span of two Martian years, detecting 55 electrical discharges, usually associated with dust devils and dust storm fronts. “We did not detect lightning by the common definition. It was a small spark, perhaps a few millimeters long, not really lightning. It sounded like a spark or whip-crack,” said planetary scientist and study co-author Ralph Lorenz of the Johns Hopkins University Applied Physics Laboratory in Maryland.
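    The kind of impulsive, whip-crack transient Lorenz describes can be picked out of an audio stream with a simple amplitude-threshold detector. The sketch below is purely illustrative and is not the SuperCam team’s actual pipeline; the function names, the 6-sigma threshold, and the synthetic test signal are all assumptions chosen for the example.

```python
import numpy as np

def detect_discharges(signal: np.ndarray, threshold_sigmas: float = 6.0,
                      min_gap: int = 100) -> list[int]:
    """Return sample indices of impulsive transients ("sparks"):
    samples whose magnitude exceeds `threshold_sigmas` times a robust
    noise estimate, merged so spikes closer than `min_gap` samples
    count as a single event."""
    # Robust noise floor via the median absolute deviation (MAD),
    # which is insensitive to the rare spikes we want to find.
    noise = 1.4826 * np.median(np.abs(signal - np.median(signal)))
    candidates = np.flatnonzero(np.abs(signal) > threshold_sigmas * noise)
    events: list[int] = []
    for idx in candidates:
        if not events or idx - events[-1] > min_gap:
            events.append(int(idx))
    return events

# Synthetic test: quiet background noise with two injected "sparks".
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.01, 44_100)   # 1 s of background noise
audio[10_000] += 1.0                  # spark 1
audio[30_000] += 0.8                  # spark 2
print(detect_discharges(audio))       # → [10000, 30000]
```

    A real detector would also cross-check the microphone events against the electromagnetic channel, as the study did, to reject mechanical noise from the rover itself.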
    Sixteen of the electrical discharges were recorded during Perseverance’s two close encounters with dust devils.
    Another study published in October documented how dust devils are a common feature on dry and dusty Mars, with two orbiting spacecraft detecting wind speeds reaching around 98 miles per hour (158 kph) in these whirlwinds that hoist dust into the atmosphere.

  • November 21 New York & Dallas E – Edition

    E-Edition PDF: https://www.theindianpanorama.news/wp-content/uploads/2025/11/TIP-November-21-E-Edition.pdf

  • US sanctions Indian entities, individuals involved in Iranian petroleum sales

    New York/Washington (TIP)- The Trump administration has sanctioned entities and individuals from India involved in sales of Iran’s petroleum and petroleum products, saying the funds from this trade support Tehran’s regional terrorist proxies and procure weapons systems that are “a direct threat” to the US.
    The Departments of State and Treasury sanctioned shipping networks responsible for funding the Iranian regime’s “malign activities” through illicit oil sales, as well as an airline and its affiliates that arm and supply Iran-backed terrorist groups.
    Among those added to the Treasury Department’s Office of Foreign Assets Control Specially Designated Nationals List are Indian nationals Zair Husain Iqbal Husain Sayed, Zulfikar Hussain Rizvi Sayed, Maharashtra-based RN Ship Management Private Limited and Pune-based TR6 Petro India LLP.
    The State Department is designating 17 entities, individuals, and vessels in several countries, including, but not limited to, India, Panama, and the Seychelles, involved in Iran’s petroleum and petroleum products sales, the administration said.
    Concurrently, the Department of the Treasury is designating 41 entities, individuals, vessels, and aircraft, intensifying its efforts against Iran’s petroleum and petrochemical exports and disrupting financial streams and commercial operatives that support Iran’s malign activities.
    The funds generated by this oil trade are used to support Iran’s regional terrorist proxies and procure weapons systems that pose a direct threat to US forces and American allies, the State Department said on Thursday.
    The administration said TR6 Petro is an India-based petroleum products trader, which between October 2024 and June 2025, imported over USD 8 million worth of Iranian-origin bitumen from multiple companies.
    It is being designated for knowingly engaging in a significant transaction for the purchase, acquisition, sale, transport, or marketing of petroleum or petroleum products from Iran, the State Department said.
    The Iranian regime continues to fuel conflict in the Middle East to fund its destabilising activities, the State Department said. This behaviour enables Iran to fund its nuclear escalations, support terrorist groups, and disrupt the flow of trade and freedom of navigation in waterways that are crucial to global prosperity and economic growth.
    The United States will continue to act against the network of maritime service providers, dark fleet operators, and petroleum products traders involved in the transport of Iranian crude oil and petroleum products, it said.
    The State Department said the US will continue to act in support of National Security Presidential Memorandum 2 (NSPM-2), which directs the imposition of maximum pressure on the Iranian regime to deny it access to resources needed to sustain its destabilising activities.
    “The United States remains committed to disrupting the illicit funding streams that finance all aspects of Iran’s malign activities. As long as Iran devotes revenue to funding attacks against the United States and our allies, supporting terrorism around the world, and pursuing other destabilising actions, we will use all the tools at our disposal to hold the regime accountable,” it said.
    The action is being taken pursuant to counter-terrorism provisions, which target Iran’s petroleum and petrochemical sector.

  • Apple will soon let you replace Siri on your iPhone

    In the past year, AI voice assistants have become much more advanced and capable. The likes of Google Gemini and OpenAI’s ChatGPT can handle complex queries and provide information to users. However, Apple’s Siri is seemingly falling behind in this race. Now, the Cupertino giant is planning to give iPhone users the choice to use third-party voice assistants.
    In iOS 26.2 beta 3, Apple has confirmed that iPhone users will soon be able to replace Siri with third-party voice assistants. The company’s official developer documentation reveals that a user will be able to activate third-party assistants via the Side Button on their iPhone.
    Apple’s Siri replacement feature limited to Japan
    Unfortunately, Apple states that the feature will only be available to users in Japan. Apple’s developer notes state, “In Japan, people might place an action on the side button of iPhone that instantly launches your voice-based conversational app.” However, the choice for consumers will also depend on whether developers have updated their apps to work with the new system.
    This will only function if the user’s Apple Account region is set to Japan and the device is physically located in the country.
    Apple’s decision to support third-party assistants in Japan is likely done in compliance with the Mobile Software Competition Act Guidelines established by the Japan Fair Trade Commission. Apple’s approach is believed to be a response to local regulatory requirements, rather than a global policy shift at this stage.
    While some may be disappointed not to get this feature outside Japan, Apple is planning a major overhaul of Siri. The Cupertino giant will reportedly pay $1 billion to Google for a Gemini-based Siri next year. Apple will also ensure that this model runs solely on its own servers, with no user data accessed by Google or any third party.

  • What is Nano Banana 2 trend? Google’s upgraded AI-image tool is taking over the internet

    Get ready for a much-awaited update from the fruit-bowl of AI. Nano Banana 2, aka Nano Banana Pro, is the latest image-generation tool from Google DeepMind, and it’s stirring up the internet. After the viral success of the original Nano Banana model, Google has now unveiled a more polished, powerful version built on the Gemini 3 Pro architecture. With studio-quality visuals, multilingual text rendering and search-grounded world knowledge baked in, Nano Banana 2 seems to be aiming for the crown of every creator’s toolbox.
    What is Nano Banana 2?
    Essentially, Nano Banana 2 (Pro) is the next-generation image generation and editing model. It supports high-fidelity outputs (2K and 4K resolution), advanced editing controls (lighting, focus, colour grading), and best of all, crisp, legible text embedded in the image. What this means in practice: a user can prompt the model to generate an infographic about cardamom tea, complete with real-world facts, step-by-step visuals, and multi-language text. The model taps into Google Search for world-knowledge grounding, allowing it to incorporate real-time data (for example, weather or sports) when generating visuals.
    Nano Banana 2 is being rolled out across several platforms: the Gemini app, Google Ads, Google Workspace and the enterprise API. Google says the version also comes with built-in SynthID watermarks to flag images as AI-generated.
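    SynthID itself is proprietary and far more robust than anything shown here, but the underlying idea of embedding a machine-readable, visually imperceptible flag in pixel data can be illustrated with a toy least-significant-bit watermark. This is purely a teaching sketch, not how SynthID actually works:

```python
import numpy as np

def embed_flag(pixels: np.ndarray, flag_bits: str) -> np.ndarray:
    """Hide a bit string in the least-significant bit of the first
    len(flag_bits) pixel values: invisible to the eye, trivially
    machine-readable. Toy illustration only."""
    out = pixels.copy()
    for i, bit in enumerate(flag_bits):
        out.flat[i] = (out.flat[i] & 0xFE) | int(bit)
    return out

def read_flag(pixels: np.ndarray, n_bits: int) -> str:
    """Recover the hidden bit string from the pixel LSBs."""
    return "".join(str(pixels.flat[i] & 1) for i in range(n_bits))

img = np.full((4, 4), 200, dtype=np.uint8)   # a flat grey "image"
marked = embed_flag(img, "1011")
print(read_flag(marked, 4))                           # → 1011
print(int(np.abs(marked.astype(int) - img).max()))    # max pixel change: 1
```

    Production watermarks like SynthID are designed to survive cropping, compression and re-encoding, which a naive LSB scheme like this cannot do.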
    Difference between Nano Banana 2 and Nano Banana 1
    While Nano Banana 1 (officially Nano Banana) wowed audiences with its rapid image-edit capabilities and meme-worthy figurines, Nano Banana 2 (Pro) represents a more refined leap. The key upgrades include supporting 4K resolution and studio-grade control over lighting, depth and composition.
    Unlike the earlier version, where text in images often looked odd or garbled, Nano Banana 2 handles typography and multilingual text cleanly. The upgraded model also integrates Google Search to generate images with real-world context (e.g. up-to-date data), something the first version lacked.
    With multi-image inputs and reference-style control (up to 14 images), the “Pro” version caters better to branding and professional workflows. In short, Nano Banana 1 was expressive and fun; Nano Banana 2 (Pro) is expressive and professional-ready.
    But it’s not all bananas and sunshine. The rise of this image tool raises several red flags, as its capabilities could be exploited for cybercrime. With high-fidelity visual generation, the potential to create realistic yet fake images increases, which means more misinformation, impersonation and brand misuse.
    When you upload images or prompts, how is your data used? Google’s systems may use inputs for training unless you opt out, raising questions about consent and privacy.

  • November 14 New York & Dallas E – Edition

    E-Edition PDF: https://www.theindianpanorama.news/wp-content/uploads/2025/11/TIP-November-14-E-Edition.pdf

  • Google unveils Private AI Compute, new cloud tech promises smarter AI without spying on you

    Google has introduced Private AI Compute, a new cloud platform designed to make its AI models smarter without compromising user privacy. The company says the technology bridges the gap between on-device security and cloud-level intelligence. This means that users will gain access to advanced AI processing without sacrificing control over their data. This launch is a big move by Google in the growing race to make AI more private and transparent, especially as companies like Apple push similar initiatives with their own cloud systems.
    The new platform essentially allows devices to tap into Google’s most powerful Gemini AI models for tasks that require heavy computation like summarising recordings, generating contextual suggestions, or managing smart features while ensuring that sensitive data stays private. Google says the system is designed so that no one, not even the company itself, can access what users share or process through it.
    How does it work?
    Explaining how it works, Jay Yagnik, Google’s Vice President of AI Innovation and Research, said that Private AI Compute runs inside a secure cloud environment built on Google’s custom Tensor Processing Units (TPUs). Using encryption and remote attestation, user devices connect to this isolated environment to process data safely. The technology is further protected by what Google calls Titanium Intelligence Enclaves (TIE), ensuring that no external entity, including Google’s own engineers or advertisers, can peek into user data.
    For years, many of Google’s AI-driven features, such as translation, audio summaries, and voice assistants, have been processed directly on devices like Pixel phones and Chromebooks. This approach prioritised privacy but came with a trade-off: limited computational power. As AI models become more advanced, Google says it’s no longer practical to run everything on-device. Private AI Compute solves this by offloading heavier tasks to a secure cloud, effectively combining the best of both worlds.
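    The hybrid flow described above, where cheap tasks stay on-device and heavier ones are offloaded only after the remote enclave proves its identity, can be sketched conceptually. Everything here is hypothetical: the names, the trivial hash-based “attestation” and the cost budget are illustrative stand-ins, since Google’s actual TIE/TPU protocol is not public at this level of detail.

```python
import hashlib
import hmac

# Pinned measurement of the enclave build the device is willing to trust
# (hypothetical value for illustration).
TRUSTED_ENCLAVE_MEASUREMENT = hashlib.sha256(b"tie-enclave-v1").hexdigest()
ON_DEVICE_BUDGET = 10  # arbitrary "cost" a device can handle locally

def attest(enclave_build: bytes) -> str:
    """The enclave proves what code it runs by presenting a measurement
    (here: a hash of its build artefact) for the device to verify."""
    return hashlib.sha256(enclave_build).hexdigest()

def process(task: str, cost: int, enclave_build: bytes,
            session_key: bytes) -> str:
    if cost <= ON_DEVICE_BUDGET:
        return f"on-device: {task}"
    # Heavier task: offload only if the remote enclave attests correctly.
    if attest(enclave_build) != TRUSTED_ENCLAVE_MEASUREMENT:
        raise RuntimeError("attestation failed; refusing to send data")
    # Bind the payload to the encrypted session before it leaves the device.
    tag = hmac.new(session_key, task.encode(), hashlib.sha256).hexdigest()
    return f"offloaded-to-enclave: {task} (mac={tag[:8]})"

print(process("set an alarm", 2, b"tie-enclave-v1", b"key"))
print(process("summarise a 1-hour recording", 50, b"tie-enclave-v1", b"key"))
```

    The key property being illustrated is that the device, not the cloud, decides whether data leaves at all, and it refuses unless the remote environment can prove it is the expected sealed enclave.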
    This move opens new possibilities for Google’s ecosystem of devices and apps. For instance, future Pixel phones like the Pixel 10 will use Private AI Compute to improve Magic Cue, a tool that pulls relevant information from apps like Gmail and Calendar to make context-aware suggestions. The Recorder app is also set to support more languages for real-time transcription and summarisation. “This is just the beginning,” Google said, hinting that more AI-powered experiences are on the way.

  • Nasa postpones Escapade Mars mission launch due to powerful solar storm

    NASA has announced the postponement of its planned Mars mission launch due to an intense solar storm, citing safety concerns for the spacecraft and mission success. The launch, originally set to lift off aboard Blue Origin’s New Glenn rocket, has been delayed until space weather conditions stabilise.
    NASA and Blue Origin issued a joint update confirming that they are temporarily holding off on the liftoff of the twin ESCAPADE (Escape and Plasma Acceleration and Dynamics Explorers) spacecraft.
    These small satellites, part of NASA’s Heliophysics Division, are designed to study the magnetic environment of Mars and how it interacts with the solar wind.
    “New Glenn is ready to launch. However, due to highly elevated solar activity and its potential effects on the ESCAPADE spacecraft, NASA is postponing launch until space weather conditions improve,” Blue Origin stated in its update.
    The teams are now reassessing upcoming launch opportunities in coordination with range authorities and space weather forecasters.
    The decision comes amid one of the most active solar periods of the current solar cycle, with a surge of solar flares and coronal mass ejections detected over the past week.
    Such solar activity can generate intense radiation and charged particles, posing a risk to spacecraft electronics and altering launch trajectories.
    ESCAPADE’s mission aims to unravel how the solar wind affects Mars’ thin atmosphere, a question tied closely to the planet’s long-term habitability.
    The small twin probes will orbit Mars to collect data about plasma processes and the dynamics of its magnetotail.
    Postponing the mission underscores NASA’s caution in protecting advanced instruments from unpredictable solar conditions.
    While specific rescheduling details were not announced, officials indicated that the next launch window will be contingent on improved space weather forecasts and range availability.
    For now, New Glenn remains on standby, with both NASA and Blue Origin emphasising that mission safety will take priority over schedule.

  • Apple Plans AI Comeback With Tabletop Robots, Life-Like Version Of Siri

    Apple Inc is plotting its artificial intelligence comeback with an ambitious slate of new devices, including robots, a lifelike version of Siri, a smart speaker with a display, and home-security cameras.
    A tabletop robot that serves as a virtual companion, targeted for 2027, is the centerpiece of the AI strategy, according to people with knowledge of the matter. The smart speaker with a display, meanwhile, is slated to arrive next year, part of a push into entry-level smart-home products.
    Home security is seen as another big growth opportunity. New cameras will anchor an Apple security system that can automate household functions. The approach should help make Apple’s product ecosystem stickier with consumers, said the people, who asked not to be identified because the initiatives haven’t been announced.
    Apple shares climbed to a session high on Wednesday after Bloomberg News reported on the plans. The stock was up nearly 2% to $233.70 as of 2:17 pm in New York.
    Tim Cook is banking on an ambitious product road map to help get the company’s AI effort on track.
    It’s all part of an effort to restore Apple’s mojo. Its most recent moon-shot project, the Vision Pro headset, remains a sales flop, and the design of its bestselling devices has remained largely unchanged for years.
    At the same time, Apple has come under fire for missing the generative AI revolution. And OpenAI may even threaten the company’s home turf by developing new AI-driven devices with the help of former Apple design chief Jony Ive.
    Though Apple is still in the early stages of turning around its AI software, executives see the pipeline of hardware as a key piece of its resurgence – helping it challenge Samsung Electronics Co, Meta Platforms Inc, and others in new categories.
    A spokesperson for Cupertino, California-based Apple declined to comment. Because the products haven’t been announced, the company’s plans could still change or be scrapped. Many of the initiatives and their timelines rely on Apple’s continued progress in AI-powered software.

  • November 7 New York & Dallas E – Edition

    E-Edition PDF: https://www.theindianpanorama.news/wp-content/uploads/2025/11/TIP-November-7-E-Edition.pdf

  • Google may soon phase out Assistant, making Gemini the only option

    Google may soon phase out Assistant, making Gemini the only option

    For nearly a decade, Google Assistant has been the ever-present digital companion of Android users – setting alarms, answering trivia, and helping millions navigate daily life with a simple “Hey Google.” But that familiar voice may soon fade into history. Reports suggest that Google is preparing to phase out its iconic Assistant in favor of Gemini, its next-generation, generative AI-powered model that represents the company’s boldest reimagining of personal assistance yet. This transition marks not just the end of an app, but the beginning of a new technological philosophy – one where assistants evolve from reactive helpers to intelligent collaborators.
    From Commands to Conversations: The Rise of Gemini
    When Google Assistant debuted in 2016, it was revolutionary. It could parse natural language, respond contextually, and integrate seamlessly with Google’s apps. But the technology landscape has changed dramatically since then. The rise of large language models (LLMs) – capable of understanding nuance, reasoning through context, and even generating creative responses – has reshaped what users expect from digital assistants. Today, people want more than reminders and weather updates; they want conversation, analysis, and creativity.
    Enter Gemini, Google’s multimodal AI platform designed to handle text, images, and soon even video. It can write essays, explain code, summarize meetings, or analyze a photo – all within the same conversation. With Gemini, Google envisions a single AI interface that integrates across every corner of its ecosystem, from Gmail to Docs, Maps to Android Auto.
    “Generative AI is transforming the way we interact with technology,” Google noted earlier this year. The message is clear: Gemini is not just an upgrade – it’s the future.
    Why Google Is Retiring Its Old Assistant
    While it might sound abrupt, the decision to replace Google Assistant with Gemini stems from both technological necessity and strategic foresight.
    Evolution of AI: Assistant was built for an earlier era of natural-language processing, where responses were scripted and tasks pre-defined. Gemini’s large-language model architecture allows far greater flexibility – it can think, reason, and generate on the fly.
    Unified Ecosystem: Over the years, Google’s AI efforts had splintered – from Assistant on phones to Bard in search to Duet AI in Workspace. By consolidating under the Gemini brand, Google aims to offer a single, coherent AI experience across all platforms.