Tag: Science & Technology

  • Scientists discover new superconductor material for wider use

    Researchers from Tokyo Metropolitan University have discovered a new superconducting material that could be more widely deployed in society. They combined iron, nickel, and zirconium to create a new transition-metal zirconide with different ratios of iron to nickel.
    While neither iron zirconide nor nickel zirconide is superconducting, the newly prepared mixtures are, exhibiting a “dome-shaped” phase diagram typical of so-called “unconventional superconductors,” a promising avenue for developing high-temperature superconducting materials, according to the study published in the Journal of Alloys and Compounds. Superconductors already play an active role in cutting-edge technologies, from superconducting magnets in medical devices and maglev systems to superconducting cables for power transmission.
    However, they generally rely on cooling to temperatures of around four kelvin, a key roadblock to wider deployment of the technology.
    Scientists are on the lookout for materials that show zero resistivity at higher temperatures, particularly above the 77-kelvin threshold at which liquid nitrogen, rather than liquid helium, can be used as the coolant. Now, a team of researchers led by Associate Professor Yoshikazu Mizuguchi of Tokyo Metropolitan University has created a new superconducting material containing a magnetic element.
    For the first time, they showed that a polycrystalline alloy of iron, nickel, and zirconium exhibits superconducting properties. Curiously, neither iron zirconide nor nickel zirconide is superconducting on its own in crystalline form.

  • Google makes Gemini AI free to use in Gmail and Docs

    If you’ve been shelling out extra cash for Google’s Gemini-powered AI features in Workspace apps, here’s some good news: they’re now free. Google has announced that starting this week, all its AI tools formerly locked behind the Rs 1,500 per month Gemini Business plan are included in standard Workspace Business and Enterprise subscriptions. The move aims to make AI capabilities more accessible as Google races against Microsoft and OpenAI.
    Before you celebrate too much, there’s a small caveat. Google is hiking the price of all Workspace plans to cover these new features. Most businesses will now pay about Rs 150 more per user per month. For context, the base subscription that previously cost Rs 900 ($12) will now be priced at Rs 1,050 ($14). It’s not exactly a bank-breaker, but it’s a notable shift for companies managing large teams.
    With Google’s Workspace AI suite, you get tools like email summaries in Gmail, automated meeting notes in Meet, spreadsheet design help in Sheets, and AI writing assistants in Docs. The Gemini bot, Google’s flagship AI, acts as a personal assistant capable of summarizing emails, finding data, and even brainstorming ideas.
    There’s also NotebookLM Plus, a research assistant designed to break down complex topics. It lets users upload documents, extract insights, and share customised notebooks for collaboration.
    Google’s decision to scrap the extra fee for Gemini seems strategic. The AI race is heating up, and rival Microsoft recently made its own AI features part of standard subscriptions for certain Microsoft 365 plans. Google is betting that by removing the financial barrier, it can convince more businesses to try its AI tools.
    Google’s move to make AI a core part of its Workspace plans signals a shift in how businesses approach productivity. The company plans to keep rolling out new AI features over the next year, ensuring its tools remain at the forefront of workplace innovation.
    For users, the message is clear: AI isn’t an optional add-on anymore; it’s the future of work. Whether you’re excited or skeptical, one thing is certain: Google’s Gemini is here to stay, and it’s now a standard part of doing business in the digital age.

  • Elon Musk’s SpaceX Starship explodes in flight test

    A SpaceX Starship rocket broke up in space minutes after launching from Texas on Thursday, forcing airline flights over the Gulf of Mexico to alter course to avoid falling debris and setting back Elon Musk’s flagship rocket program. SpaceX mission control lost contact with the newly upgraded Starship, carrying its first test payload of mock satellites but no crew, eight minutes after liftoff from its South Texas rocket facilities at 5:38 pm EST (2238 GMT).
    Video shot by Reuters showed orange balls of light streaking across the sky over the Haitian capital of Port-au-Prince, leaving trails of smoke behind.
    “We did lose all communications with the ship – that is essentially telling us we had an anomaly with the upper stage,” SpaceX Communications Manager Dan Huot said, confirming minutes later that the ship was lost.

  • Microsoft to end support for Office apps on Windows 10 in October

    Microsoft has announced that it will end support for Microsoft Office apps on Windows 10 devices after the operating system reaches its end of support on October 14, 2025. Users will need to upgrade to Windows 11 to continue using Microsoft 365 apps without potential issues, the tech giant stated on Tuesday.
    “Microsoft 365 Apps will no longer be supported after October 14, 2025, on Windows 10 devices. To use Microsoft 365 Applications on your device, you will need to upgrade to Windows 11,” the company explained.
    This announcement also affects standalone Office versions, including Office 2024, Office 2021, Office 2019, and Office 2016, meaning these versions will no longer receive updates or technical support on Windows 10 devices.
    While the applications will still function beyond the support cutoff, Microsoft cautioned users about potential performance and reliability problems. “We strongly recommend upgrading to Windows 11 to avoid performance and reliability issues over time,” the company advised in a separate support document.
    Microsoft has been urging users to migrate to Windows 11 since its launch in October 2021, even dubbing 2025 “the year of the Windows 11 PC refresh.” However, adoption has been slow, with only 35 per cent of Windows users worldwide running Windows 11 as of now, according to Statcounter data. Meanwhile, Windows 10 remains the dominant version, powering 62 per cent of all Windows systems globally.
    A key barrier to Windows 11 adoption has been Microsoft’s stringent hardware requirements, particularly the need for TPM (Trusted Platform Module) 2.0. The feature, which Microsoft claims enhances resistance to tampering and cyberattacks, has been labelled “non-negotiable” for Windows 11 installations. Many users with older hardware have found the requirement difficult to meet, prompting the creation of workarounds to bypass it.
    In light of the challenges, Microsoft has offered some leeway for users unwilling or unable to upgrade. Home users can delay the switch to Windows 11 for an additional year by purchasing Extended Security Updates (ESU) for $30. Certain enterprise and specialised systems, including those using Long-Term Servicing Branch (LTSB) and Long-Term Servicing Channel (LTSC) editions, will continue to receive updates beyond the October 2025 cutoff.
    Although users can keep using their Windows 10 PCs and Office apps after support ends, they will no longer receive security updates, exposing them to increased risks over time. Microsoft has also reiterated that the free upgrade to Windows 11 remains available, provided users meet the minimum system requirements.
    Margaret Farmer, a representative for Microsoft, emphasised this point: “You need to confirm that your computer meets the minimum system requirements for the update,” she stated.
    The clock is ticking for the millions still on Windows 10. While the operating system’s popularity has endured, Microsoft’s move to cut off Office app support adds another incentive for users to transition. For those who rely heavily on Microsoft 365 or other Office products, the decision to upgrade sooner rather than later is becoming increasingly pressing.

  • ‘Life sprouts in space’, says ISRO after cowpea seeds germinate under microgravity conditions

    ISRO has said the cowpea seeds it had sent to space on board the PSLV-C60 POEM-4 platform have germinated under microgravity conditions within four days of the launch of the mission.
    The space agency sent eight cowpea seeds as part of the Compact Research Module for Orbital Plant Studies (CROPS) experiment conducted by the Vikram Sarabhai Space Centre (VSSC) to study plant growth in microgravity conditions.
    “Life sprouts in space! VSSC’s CROPS experiment onboard PSLV-C60 POEM-4 successfully sprouted cowpea seeds in 4 days. Leaves expected soon,” ISRO said in a post on X. The PSLV-C60 mission placed two SpaDeX satellites in orbit on the night of December 30. The fourth stage of the rocket carrying the POEM-4 platform has been orbiting the earth with 24 onboard experiments at an altitude of 350 km.
    The CROPS experiment aims to understand how plants grow in the unique conditions of space, which is essential for future long-duration space missions.
    The experiment involves growing eight cowpea seeds in a controlled environment with active thermal regulation, simulating conditions that plants might encounter during extended space travel.
    CROPS is envisioned as a multi-phase platform to develop and evolve ISRO’s capabilities for growing and sustaining flora in extra-terrestrial environments.
    CROPS is designed as a fully automated system; a five-to-seven-day experiment has been planned to demonstrate seed germination and plant sustenance up to the two-leaf stage in a microgravity environment.
    The cowpea seeds have been placed in a closed-box environment with active thermal control.
    Passive measurements, including camera imaging and monitoring of oxygen and carbon dioxide concentrations, relative humidity, temperature, and soil moisture, are available for tracking plant growth, the space agency said.
    ISRO also posted a separate “selfie video” of the chaser satellite of the space docking experiment that is orbiting the earth at an altitude of 470 km.
    The chaser satellite is expected to dock with the target satellite in space on Tuesday, a feat that would make India only the fourth country to master this cutting-edge technology after Russia, the US and China.
    Source: PTI

  • Google Gemini is racing to win the AI crown in 2025

    Google CEO Sundar Pichai acknowledged at the company’s year-end strategy meeting that the AI models powering Google Gemini are behind OpenAI and ChatGPT but promised a real push in 2025 to get Gemini to outpace its rivals, as reported by CNBC.
    Pichai’s directive comes off as more serious than the usual corporate rah-rah; it’s a declaration that Google won’t lose any more ground in a race it once led. Google’s nearly bottomless coffers and enormous infrastructure give it a good chance of coming out on top in 12 months, but only because the company is no longer resting on the laurels it’s been polishing since the early 2000s.
    “In history, you don’t always need to be first, but you have to execute well and really be the best in class as a product,” Pichai said at the meeting. “I think that’s what 2025 is all about.”

  • Google’s Gemini AI may come soon to Wear OS smartwatches

    Last year, Gemini made its way to iOS devices through a dedicated app, and now Google is eyeing even bigger opportunities. One of the big announcements at CES 2025 was that Google TV devices will soon get a boost from Gemini. The highlight? You’ll be able to talk to your TV’s virtual assistant without needing to speak into the remote. This is all thanks to an upgraded voice assistant powered by Gemini AI, set to roll out later this year.
    But TVs aren’t the only focus. Wear OS, Google’s smartwatch platform, is also likely to get Gemini support very soon. There’s been speculation that this could happen alongside Google’s March 2025 Pixel Feature Drop. A recent report from 9to5Google pointed out some strong hints about this in the beta version of the Google App (v16.0.5). Buried in the code are references to Gemini as a wearable assistant. One line reads, “Easily talk back and forth to get more done with an assistant on your watch, reimagined with Google AI.” That wording suggests that the assistant on Wear OS will be more natural and conversational than before, allowing for more dynamic back-and-forth chats.
    When Gemini arrives on Wear OS, it’ll likely be through an update to the existing Google Assistant app. The basic way to call up the assistant probably won’t change much — you’ll still use the “Hey Google” wake word or long-press your watch’s side button. However, there’s also been talk about a new “Hey Gemini” hot word, which could replace or sit alongside “Hey Google”. This was spotted in demo videos for Android XR and could make its way to Wear OS as well.
    Google is clearly betting big on Gemini to redefine how we interact with its devices. From TVs to smartwatches, it’s all about making tech feel more human. While we’ll have to wait for exact dates, 2025 looks like it’s going to be another exciting year for Google’s AI-powered ecosystem.
    For the unaware, Google made significant strides with its Gemini AI in 2024, and it seems the momentum will continue going forward. Originally introduced as Bard, Google’s chatbot has since evolved into Gemini, which is gradually being integrated into a variety of the company’s products and services. From smartphones and tablets to Gmail, Google Docs, and Maps, Gemini is becoming a more common presence across Google’s ecosystem.

  • Android phones may soon support iPhone-like MagSafe wireless charging

    After years of waiting, it seems like Android phones are finally catching up with Apple’s MagSafe wireless charging. The Wireless Power Consortium (WPC) has confirmed that Qi2 wireless charging is coming to Android, with Samsung and Google leading the way. The announcement was made at CES 2025, bringing an exciting update for fans who have long wanted the Qi2 wireless charging standard for Android smartphones.
    For the unaware, Qi2 is the next big step in wireless charging, replacing the older Qi standard. It promises faster and more efficient charging, with speeds of up to 15W. The technology uses magnetic rings to align your device perfectly with the charger. This clever design reduces energy loss caused by misalignment, making it a more reliable option than before. While Apple’s MagSafe has been using similar magnetic-alignment technology since 2020, Android phones have mostly lagged behind — until now.
    Both Samsung and Google have officially committed to the Qi2 standard. Samsung announced that Galaxy phones will start supporting Qi2 later this year. However, it’s still unclear if the upcoming Galaxy S25 series, set to launch this month, will include native Qi2 support. Rumours suggest the S25 might offer Qi2 functionality through special cases, but full integration may have to wait until the Galaxy S26.
    Google’s plans are a bit more mysterious. The company has said it’s playing a major role in advancing the technology, especially with the development of Qi 2.2. Google aims to improve cross-brand compatibility, which has been a pain point for Android users. If successful, this could allow chargers and phones from different manufacturers to take full advantage of Qi2’s faster speeds.

  • WhatsApp Web to soon help users detect misleading info with ‘Reverse Image Search’ feature

    WhatsApp is taking significant steps to curb misinformation by launching a reverse image search feature, now available for users on WhatsApp Web Beta. This new tool, developed in collaboration with Google, aims to help users verify the authenticity of images they receive within the app. The feature is designed to help detect whether an image has been manipulated or taken out of context, making it easier for users to identify fake content.
    How the Reverse Image Search Works
    The reverse image search can be accessed directly through WhatsApp without requiring users to download the image. When a user selects the option to search an image on the web, WhatsApp uploads it to Google’s reverse image search, with the user’s permission. The search is then conducted through the user’s default web browser. WhatsApp says it does not have access to the content of the image during this process, with all actions managed by Google.
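    To make that hand-off concrete, here is a minimal sketch in Python. It is an illustration only: the searchbyimage endpoint shown is Google’s long-standing public reverse-image-search URL, and the exact endpoint and flow WhatsApp Web uses have not been published.

      # Illustrative sketch of a reverse-image-search hand-off.
      # Assumption: Google's public "searchbyimage" endpoint; the exact
      # endpoint and flow WhatsApp Web uses have not been published.
      import webbrowser
      from urllib.parse import urlencode

      def search_image_on_web(image_url: str) -> None:
          # The app never parses the image itself; it only forwards a URL
          # and lets the user's default browser run the search, mirroring
          # the privacy-preserving flow described above.
          query = urlencode({"image_url": image_url})
          webbrowser.open("https://www.google.com/searchbyimage?" + query)

      search_image_on_web("https://example.com/received-photo.jpg")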
    This new feature comes amid WhatsApp’s ongoing efforts to enhance privacy and security for its users.
    The company recently launched several updates for iOS users, including new augmented reality (AR) effects and filters for video calls and photos. The latest update, version 24.25.93, introduces AR effects such as confetti, star windows, and underwater scenes, which can be accessed through the camera’s image wand icon. Users can also enjoy new document scanning tools, which include colour, grayscale, and black-and-white filters, along with an auto-shutter feature for improved scanning.
    WhatsApp has also shared a significant update regarding its safety efforts in India. The platform banned 73.6 million accounts throughout the year, with 13.7 million of those accounts being removed proactively between January and October. WhatsApp has long prioritised privacy and security, with major features like end-to-end encryption (2016), Face ID and Touch ID unlock (2019), disappearing messages (2020), and the Private Audience Selector (2023).

  • How can you protect your privacy while using AI tools?

    The adoption of AI tools and products is growing at a steady pace. Today, companies are rolling out AI chatbots to serve almost every need of users. There are AI chatbots that can write an essay, role-play as your partner, remind you to brush your teeth, take down notes during meetings, and more. These AI tools are mostly in the form of large language models (LLMs) such as OpenAI’s ChatGPT.
    However, the LLMs being developed and deployed everywhere could threaten users’ privacy, as they are trained on large amounts of data gathered by indiscriminately scraping information available online. Yet many users remain unaware of the privacy and data protection risks that come with LLMs as well as other generative AI tools.
    Over 70 per cent of users interact with AI tools without fully understanding the dangers of sharing personal information, according to a recent survey. It also found that at least 38 per cent of users unknowingly revealed sensitive details to AI tools, putting themselves at risk of identity theft and fraud.
    Feeding in the right prompts could also cause LLMs to “regurgitate” personal user data as they are likely trained on data pulled from every nook and corner of the internet.
    Beware of social media trends
    Recently, a trend that went viral on social media urged users to ask an AI chatbot to “Describe my personality based on what you know about me”. Users were further encouraged to share sensitive data like their birth date, hobbies, or workplace. However, this information can be pieced together, leading to identity theft or account recovery scams.
    Risky Prompt: "I was born on December 15th and love cycling—what does that say about me?"
    Safer Prompt: “What might a December birthday suggest about someone’s personality?”
    Do not share identifiable personal data
    According to experts from TRG Datacenters, users should frame their queries or prompts to AI chatbots more broadly to protect their privacy; a simple automated check along these lines is sketched after the examples below.
    Risky Prompt: “I was born on November 15th—what does that say about me?”
    Safer Prompt: “What are traits of someone born in late autumn?”
    Avoid disclosing sensitive information about your children
    Parents can unintentionally share sensitive details such as their child’s name, school, or routine while interacting with an AI chatbot. This information can be exploited to target children.
    Risky Prompt: “What can I plan for my 8-year-old at XYZ School this weekend?”
    Safer Prompt: “What are fun activities for young children on weekends?”
    Never share financial details
    Over 32 per cent of identity theft cases stem from online data sharing, including financial information, according to a report by the US Federal Trade Commission (FTC).
    Risky Prompt: “I save $500 per month. How much should I allocate to a trip?”
    Safer Prompt: “What are the best strategies for saving for a vacation?”
    Refrain from sharing personal health information
    Since health data is frequently exploited in data breaches, avoid sharing personal medical histories or genetic risks with AI chatbots:
    Risky Prompt: “My family has a history of [condition]; am I at risk?”
    Safer Prompt: “What are common symptoms of [condition]?”
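    As flagged above, one way to operationalise this advice is a lightweight check that scans a prompt for obvious personal identifiers before it is sent. The sketch below is illustrative: the flag_pii name and the patterns are hypothetical and deliberately non-exhaustive, not a complete PII detector.

      # Illustrative pre-send check for personal details in a chatbot
      # prompt. The patterns are a hypothetical, non-exhaustive sketch,
      # not a complete PII detector.
      import re

      PII_PATTERNS = {
          "date of birth": r"\bborn on \w+ \d{1,2}(st|nd|rd|th)?\b",
          "money amount": r"[$€£]\s?\d[\d,]*|\bRs\.?\s?\d[\d,]*",
          "email address": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
          "phone number": r"\b\d{10}\b|\b\d{3}[- ]\d{3}[- ]\d{4}\b",
      }

      def flag_pii(prompt: str) -> list:
          # Return the categories of personal data found in the prompt.
          return [label for label, pattern in PII_PATTERNS.items()
                  if re.search(pattern, prompt, flags=re.IGNORECASE)]

      risky = "I was born on December 15th and love cycling"
      print(flag_pii(risky))  # ['date of birth'] -> rephrase before sending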

  • NASA probe safely completes closest-ever approach to the Sun

    NASA’s Parker Solar Probe successfully made the closest-ever approach to the Sun, as per news agency Reuters, coming within 3.8 million miles (6.1 million kilometers) of its surface.
    The probe endured scorching temperatures of up to 1,800 degrees Fahrenheit (982 degrees Celsius) as it ventured into the Sun’s outer atmosphere, called the corona. It reached speeds of up to 430,000 mph (692,000 kph), making it the fastest human-made object.
    After receiving an all-clear message, NASA confirmed on Friday that the probe is safe and functioning normally. The signal, known as a beacon tone, was received by the operations team at the Johns Hopkins Applied Physics Laboratory in Maryland late Thursday, indicating the spacecraft’s status.
    The spacecraft, which was launched in 2018, is on a mission to study the Sun, with scientists hoping that the data collected by the probe will help them better understand why the Sun’s outer atmosphere is hundreds of times hotter than its surface.
    “This mission allows us to get closer to the Sun than ever before, helping scientists understand how the solar wind forms, why the Sun’s outer layer heats up to millions of degrees, and how energetic particles accelerate near light speed,” NASA explained.
    The data might also contribute to answering what drives the solar wind — the supersonic stream of charged particles that constantly blasts away from the Sun.

  • Biggest Innovations in AI, Science and Technology

    In 2024, significant innovations in AI, science, and technology emerged, including AI-powered scientific discovery, carbon-capturing microbes, and elastocalorics. These advancements enhance productivity, sustainability, and energy efficiency across industries. Notable trends like multimodal AI and quantum computing also promise to reshape the technological landscape, driving future growth.

    2024 has been an exciting year for artificial intelligence, science, and technology innovations. Many new developments have changed industries and improved our daily lives.
    In AI, we saw the rise of Generative AI 3.0. This advanced technology can understand and create content almost like a human. It has transformed fields like healthcare, education, and entertainment.
    In science, researchers discovered several Earth-like exoplanets. These findings have sparked renewed interest in exploring space and searching for life beyond our planet.
    Meanwhile, there were breakthroughs in quantum computing technology. New quantum processors can solve complex problems in seconds, opening doors to new possibilities.
    In this article, we’ll explore these incredible milestones in detail and highlight how they are shaping a smarter, more connected future. From smarter AI to space exploration and faster computing, 2024 truly showcased a year of innovation.
    AI for Scientific Discovery
    AI is revolutionising scientific research by enabling breakthroughs across various fields. One significant advancement is in protein structure prediction, where models like AlphaFold drastically reduce the time needed to determine structures from years to mere minutes.
    This capability accelerates drug discovery and helps address challenges such as antibiotic resistance.
    Additionally, AI aids in discovering new materials and improving battery efficiency, among other technological advancements.
    The integration of AI into scientific processes not only enhances outcomes but also transforms research methodologies into more collaborative and data-driven approaches.
    OpenAI’s o1 Model
    OpenAI’s o1 Model marks a substantial leap in artificial intelligence capabilities, especially in reasoning and problem-solving tasks.
    This model enhances the ability of AI to comprehend complex mathematical concepts and coding challenges effectively.
    By improving reasoning skills, the o1 Model facilitates more sophisticated interactions with users, allowing it to assist in educational contexts as well as professional environments.
    Its potential applications are extensive, ranging from educational tools to advanced scientific research assistance, positioning it as a valuable resource for developers and researchers alike.
    Google DeepMind’s GenCast
    Google DeepMind’s GenCast represents a groundbreaking advancement in weather prediction technology. Utilising advanced AI algorithms, GenCast delivers more accurate forecasts essential for agriculture and disaster preparedness.
    With reliable weather data at their disposal, farmers can make informed decisions that optimise crop yields while minimising losses from adverse weather conditions.
    Furthermore, enhanced weather predictions play a crucial role in disaster management by allowing communities to prepare better for extreme weather events, showcasing how AI directly impacts societal well-being through improved information dissemination.
    Microsoft’s Copilot Vision
    Microsoft’s Copilot Vision is an innovative tool designed to assist users with visual tasks across various applications.
    By integrating AI into everyday software environments, this tool enhances both productivity and creativity for users. It streamlines workflows by simplifying complex tasks while understanding user intent to provide relevant suggestions effectively.
    This significant advancement in human-computer interaction not only boosts individual productivity but also promotes collaboration among teams by facilitating more intuitive communication methods.
    Anthropic’s Claude 3.5 Sonnet
    Anthropic’s Claude 3.5 Sonnet introduces a new level of interaction between artificial intelligence and computer systems by mimicking human-like behaviours such as moving cursors, clicking buttons, and typing text.
    This capability signifies a shift toward more versatile and user-friendly AI interactions, allowing automation of routine tasks typically performed by humans.
    By enabling seamless interactions with software tools, Claude 3.5 Sonnet opens new possibilities for efficiency in environments where repetitive tasks are prevalent, allowing users to concentrate on strategic activities while the AI manages routine operations effectively.
    Multimodal AI
    Multimodal AI is set to transform how artificial intelligence systems interact with users by integrating multiple forms of data, such as text, audio, images, and video.
    This innovation allows AI to understand and process information more like humans do, leading to more intuitive and effective user experiences.
    For instance, in healthcare, multimodal AI can analyse patient data from various sources to provide comprehensive insights into health conditions. In retail, it enhances customer interactions by combining visual recognition with voice commands, improving service quality.
    As this technology evolves, it will foster more natural and empathetic interactions across various sectors, making AI applications more accessible and user-friendly.
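    A common way to combine modalities is “late fusion”: each input is encoded separately and the representations are merged into one. The toy sketch below uses made-up numbers purely to illustrate the idea; real systems use learned encoders with far larger vectors.

      # Toy sketch of late fusion in a multimodal system: embeddings
      # from two modalities are merged into one representation.
      # The vectors are made up; real systems use learned encoders.
      text_embedding = [0.2, 0.7, 0.1]   # from a text encoder (hypothetical)
      image_embedding = [0.4, 0.3, 0.9]  # from an image encoder (hypothetical)

      fused = [(t + i) / 2 for t, i in zip(text_embedding, image_embedding)]
      print(fused)  # a single representation for the downstream model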
    Agentic AI
    Agentic AI represents a significant shift towards autonomy in artificial intelligence systems. Unlike traditional models that react to user inputs, agentic AI can act independently to achieve specific goals.
    This innovation enables systems to analyse their environments and make proactive decisions without direct human intervention.
    For example, in environmental monitoring, an agentic AI could autonomously detect changes in ecosystems and initiate preventive actions against potential hazards.
    In finance, it can manage investment portfolios adaptively based on real-time market conditions. The development of agentic AI opens new possibilities for automation across industries, enhancing efficiency and responsiveness.
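    To make the distinction concrete, here is a toy observe-decide-act loop. The sensor, threshold, and action names are invented for illustration and do not correspond to any real monitoring system.

      # Toy sketch of an agentic loop: the system observes, decides and
      # acts on its own schedule rather than waiting for a user prompt.
      # All names and numbers here are illustrative.
      import random

      def observe_environment() -> float:
          # Stand-in sensor, e.g. a river pollution reading in [0, 1].
          return random.uniform(0.0, 1.0)

      def decide(reading: float, threshold: float = 0.8) -> str:
          return "raise_alert" if reading > threshold else "keep_monitoring"

      def act(action: str) -> None:
          if action == "raise_alert":
              print("Hazard detected: notifying operators proactively.")

      for _ in range(5):  # runs autonomously, not per user query
          act(decide(observe_environment()))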
    Explainable AI (XAI)
    Explainable AI (XAI) aims to make the decision-making processes of AI systems transparent and understandable to humans.
    As AI becomes more integrated into critical areas like healthcare and finance, the need for trust in these systems grows.
    XAI provides insights into how models arrive at their conclusions, allowing users to comprehend the rationale behind decisions. This transparency is crucial for regulatory compliance and ethical considerations in AI deployment.
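    In the simplest case, “explaining a decision” can mean decomposing a model’s score into per-feature contributions. The toy linear model below, with entirely hypothetical weights and inputs, shows the idea; real XAI tooling generalises it to far more complex models.

      # Minimal illustration of explainability for a linear model: each
      # feature's contribution (weight * value) explains the final score.
      # All weights and inputs below are hypothetical.
      weights = {"income": 0.6, "debt": -0.9, "age": 0.1}
      applicant = {"income": 0.8, "debt": 0.5, "age": 0.4}

      contributions = {f: weights[f] * applicant[f] for f in weights}
      score = sum(contributions.values())

      print("score = %+.2f" % score)
      for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
          # largest absolute contribution first: the 'why' behind the score
          print("  %-6s: %+.2f" % (feature, c))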

  • AI models now create text-to-video clips, promise a new future

    Imagine a vivid dream filled with mythical characters and vast landscapes, with you as the protagonist in futuristic armour. Now, imagine describing that dream and watching it transform into a high-definition video. Sounds impossible? Not anymore.
    Enter Veo 2, Google DeepMind’s latest text-to-video platform, which creates realistic 4K videos from simple text prompts. Its capabilities have stunned the internet with mesmerising demo videos, leaving many in awe — and some video creators worried about their jobs.
    But Veo 2 isn’t alone. OpenAI’s Sora is another challenger in the fray, bringing its own approach to this exciting new age of AI-generated video creation. So, how do these platforms compare? Which one is better? And what does this mean for the future of video creation?
    Veo 2 vs Sora
    Sora, launched in early December this year, is available to ChatGPT Plus users globally. While the OpenAI video maker has gained a head start, with general users already able to use the platform, Google’s Veo 2 is still in beta testing.
    Veo 2 seemingly has an advantage over Sora for several reasons:
    – 4K video resolution: Veo 2 offers video resolution of up to 4K, which means better-quality videos. In contrast, Sora offers a maximum resolution of 1080p, which isn’t bad, but 4K is 4K.
    – Video duration: Veo 2 renders videos up to 2 minutes in length. By comparison, Sora creates shorter clips of up to 20 seconds.
    – Cinematic control: Veo 2 offers virtual camera control with options for adding cinematic movements like pans and tilts. Its accuracy has stunned many users online. You can even play around with the lighting for a particular scene, which helps enhance storytelling. Sora focuses more on style presets and storyboarding. It’s similar to editing photos on your phone: you can make each adjustment manually or use the presets on offer.
    This video demonstrates the cinematic difference between the two. A tracking shot along a busy city street captured on Veo 2 illustrates better results in terms of camera angles and lighting as compared to Sora.
    Similarly, in another post, you can see a slow zoom-in shot and the camera rotating around a stack of TVs. Despite the prompt asking for the camera to rotate, Sora’s output rendered a static camera shot. This stifles a creator’s vision in many ways.
    – Realism: Some Veo 2 renders online show its true ability to output photorealistic videos. The same goes for physics-based motion accuracy, which makes animations look more natural. This is an area where Sora struggles.
    A video by X user Ruben Hassid shows the differences between the two video engines: there are several inconsistencies in the results Sora generated, whereas Veo 2 rendered more lifelike results.
    On balance, Veo 2 is the more comprehensive choice for video creators, but sadly it’s not available to general users yet. Google DeepMind has made the tool available only to a select few users, with no clarity on when the final version will be rolled out.
    Sora, on the other hand, is available for commercial use with a ChatGPT Plus subscription that costs around Rs 1,676 in India.

  • NASA’s Parker Solar probe to make historic closest approach to the Sun

    A NASA spacecraft aims to fly closer to the sun than any object sent before. The Parker Solar Probe was launched in 2018 to get a close-up look at the sun. Since then, it has flown straight through the sun’s corona: the outer atmosphere visible during a total solar eclipse.
    The next milestone: closest approach to the sun. Plans call for Parker on Tuesday to hurtle through the sizzling solar atmosphere and pass within a record-breaking 3.8 million miles (6 million kilometers) of the sun’s surface. At that moment, if the sun and Earth were at opposite ends of a football field, Parker “would be on the 4-yard line,” said NASA’s Joe Westlake.
    Mission managers won’t know how Parker fared until days after the flyby since the spacecraft will be out of communication range.
    Parker planned to get more than seven times closer to the sun than previous spacecraft, hitting 430,000 mph (690,000 kph) at closest approach. It’s the fastest spacecraft ever built and is outfitted with a heat shield that can withstand scorching temperatures up to 2,500 degrees Fahrenheit (1,371 degrees Celsius).
    It’ll continue circling the sun at this distance until at least September. Scientists hope to better understand why the corona is hundreds of times hotter than the sun’s surface and what drives the solar wind, the supersonic stream of charged particles constantly blasting away from the sun.
    The sun’s warming rays make life possible on Earth. But severe solar storms can temporarily scramble radio communications and disrupt power.
    The sun is currently at the maximum phase of its 11-year cycle, triggering colorful auroras in unexpected places.
    “It both is our closest, friendliest neighbor,” Westlake said, “but also at times is a little angry.”

  • ISRO to study how crops grow in space on PSLV-C60 mission

    Demonstration of seed germination in outer space, a robotic arm to catch tethered debris there, and testing of green propulsion systems are some of the experiments planned on the POEM-4 — the fourth stage of ISRO’s PSLV rocket that remains in orbit after launching a satellite. The PSLV-C60 mission, slated for a year-end launch, is scheduled to place the twin satellites ‘Chaser and Target’ in orbit to demonstrate the space docking technologies that are crucial for building India’s space station.
    The PSLV Orbital Experiment Module (POEM) will carry 24 experiments — 14 from various ISRO labs and 10 from private universities and start-ups — to demonstrate various technologies in space.
    ISRO plans to grow eight cowpea seeds, from germination through plant sustenance until the two-leaf stage, in a closed-box environment with active thermal control as part of the Compact Research Module for Orbital Plant Studies (CROPS) developed by the Vikram Sarabhai Space Centre.
    The Amity Plant Experimental Module in Space (APEMS), developed by Amity University, Mumbai, plans to study the growth of spinach in a microgravity environment.
    Two parallel experiments will be carried out simultaneously — one on POEM-4 in space and one on the ground at the university. The experiment’s outcome will provide insights into how higher plants sense the direction of gravity and light.
    The Debris Capture Robotic Manipulator, developed by VSSC, will demonstrate the capturing of tethered debris by a robotic manipulator using visual servoing and object motion prediction in the space environment.

  • Apple releases iOS 18.3 first developer beta

    Apple has released the first developer beta version of its upcoming iOS 18.3 update, just days after introducing iOS 18.2, which was packed with Apple Intelligence features. While Apple has not yet disclosed specific changes in the latest beta, there is speculation about support for more smart home devices, such as robot vacuum cleaners, within the Home app. Although the first developer beta for iOS 18.3 is now available, the stable version of the update is expected to roll out in January next year.
    What to expect from upcoming iOS updates
    The iOS 18.2 update added advanced Apple Intelligence features, such as image generation tools and system-wide ChatGPT integration. However, there are more intelligence features that Apple has yet to roll out, including on-screen awareness, personal context understanding, and more for the digital assistant Siri. Apple has also not yet launched the Priority Notification feature, which uses AI to sort notifications based on their importance.
    More advanced Siri features are expected with the iOS 18.4 update in March next year, but iOS 18.3 could bring some minor tweaks and changes to existing Apple Intelligence features.
    Apple Intelligence features with iOS 18.2
    Last week, Apple released the iOS 18.2 update for eligible iPhones, adding several new Apple Intelligence features, including support for OpenAI’s ChatGPT within Siri and Writing Tools, Image Playground, and Genmoji. Here are the key Apple Intelligence features:
    – Image Playground: A tool for generating images using text input or existing images from the Photos app. Users can create images based on themes, styles (such as animation, 3D, or illustrations), and specific requests. It is integrated within native apps like Messages, Freeform, and Keynote and also available as a standalone app.
    – Genmoji: Integrated into the emoji keyboard, Genmoji enables users to generate custom emojis using text and images.
    – Image Wand: Introduced in the Notes app, this tool transforms sketches into illustrations or generates images based on surrounding text.
    – Describe Your Change (Writing Tools): This new option complements existing tools like Rewrite, Proofread, and Summarise, giving users more control over writing style and expressiveness.
    – Compose with ChatGPT (Writing Tools): OpenAI’s chatbot integration allows users to generate text and accompanying images, accessible systemwide.
    – Visual Intelligence: The iPhone 16 series introduces a new Visual Intelligence feature. By long-pressing the new Camera Control button, users can get information about real-life objects using Google image search or ChatGPT. It can also translate text, summarise it, and even detect and save phone numbers and email addresses.
    – ChatGPT in Siri: With ChatGPT integration, Siri can now suggest using ChatGPT for specific queries and provide ChatGPT-powered responses directly. Users can enable or disable this integration and manage shared information.
    While the iOS 18.2 update is available for all iPhones running iOS 18 versions, Apple Intelligence features are only available on iPhone 15 Pro models and the new iPhone 16 series.

  • Google unveils Veo 2 video AI generator to compete with OpenAI’s Sora

    Google has introduced a new and improved Veo 2 video generator model to compete with the likes of OpenAI’s Sora. The company claims that the successor to the original Veo AI model can create realistic motion and high-quality output of up to 4K, which it says is better than leading AI video generator platforms. Alongside this, Google also announced the latest Imagen 3 version and a new Whisk model to create a single image from multiple visuals. Here is everything to know.
    Google shared a series of short video clips created using Veo 2, which show that the platform can generate hyper-realistic videos of animals and food. We can also see animated clips of humans, all of which are 8-second videos. “Veo 2 outperforms other leading video generation models, based on human evaluations of its performance,” Google said. While the company hasn’t named its rivals, it is likely pointing towards OpenAI’s Sora — which is also a video generator. In a benchmark graph, the company claims that its Veo 2 model is preferred by people over Meta Movie Gen, Kling V1.5, Minimax and Sora Turbo.
    The samples shared by Google look great, but some scenes with motion are seemingly a bit inaccurate, with a few details missing in parts of a frame. Google acknowledges this and says that complete consistency throughout complex scenes, or those with complex motion, remains a challenge. Even so, the overall quality of the videos is quite impressive. “While Veo 2 demonstrates incredible progress, creating realistic, dynamic, or intricate videos, and maintaining complete consistency throughout complex scenes or those with complex motion, remains a challenge. We’ll continue to develop and refine performance in these areas,” said Google DeepMind.
    As for the Imagen 3 model, Google claims it can now create brighter and more realistic images with vibrant hues, better colour balance, and fidelity. The company also claims it can produce highly detailed textures and attractive visuals.

  • YouTube says it will soon start detecting AI avatars of celebs

    YouTube is stepping up to protect creators from misuse of their likenesses with the help of AI. The platform is teaming up with the Creative Artists Agency (CAA) to roll out tools that allow creators and celebrities to detect AI-generated content that uses their image, face, or voice. This partnership aims to give individuals more control over how their digital likeness is used, especially as artificial intelligence becomes more sophisticated and widespread.
    Starting early next year, YouTube will test these tools with celebrities and athletes. The idea is to let them find videos on the platform that mimic their face, voice, or other aspects of their identity using AI and make it easier to request that such content be removed. Once this initial phase is complete, YouTube plans to expand the program to include top creators on the platform, creative professionals, and other influential figures. This move is expected to protect YouTube’s most prominent users, who often have to deal with issues like impersonation and misuse of their image.
    In September, YouTube had already announced plans to develop tools that help manage AI-generated depictions of creators, including their voices. Now, the company is building on that commitment by giving celebrities the ability to deal with these issues on a larger scale. This is especially relevant as AI tools can now create hyperrealistic versions of someone’s face, voice, or body, which can then be used in ways that the original person never agreed to.

  • Isro fires LVM3 cryogenic engine that will power Gaganyaan Mission

    The Indian Space Research Organisation (Isro) has achieved a key milestone in its space program with the successful sea-level hot test of the CE20 Cryogenic Engine on November 29, 2024. Conducted at the Isro Propulsion Complex, the test is a critical step in enhancing the engine’s performance and reliability.
    The CE20 engine powers the upper stage of Isro’s LVM3 launch vehicle, which has played a pivotal role in missions like Chandrayaan-2, Chandrayaan-3, and the upcoming Gaganyaan human spaceflight project.
    The engine has already proven its capability in six successful LVM3 missions and is designed to operate at thrust levels ranging from 19 tonnes to an upgraded 22 tonnes for future missions.
    One of the test’s highlights was the demonstration of a multi-element igniter, essential for the engine’s restart capability in space. This innovation ensures greater flexibility for complex missions.
    Additionally, Isro introduced a Nozzle Protection System to address challenges such as vibrations and thermal stress during sea-level testing. The system simplifies testing and reduces costs while maintaining safety and efficiency.
    The engine and testing facility performed flawlessly, meeting all expected performance parameters. With this success, the CE20 engine is now further prepared to support India’s ambitious Gaganyaan mission, which will send three astronauts into orbit.
    This achievement shows Isro’s ongoing efforts to develop advanced space technology, enhance payload capacities, and strengthen India’s position in global space exploration.
    As Isro continues to refine its capabilities, the CE20 engine remains central to future missions, paving the way for greater achievements in space.