Global AI Weekly

Issue number: 34 | Tuesday, January 2, 2024

Highlights

10 AI Predictions For 2024

The world of AI will evolve in dramatic and surprising ways in 2024.

forbes.com

This week in AI: AI ethics keeps falling by the wayside

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

techcrunch.com

LLMLingua: Innovating LLM efficiency with prompt compression

Advanced prompting techniques for LLMs can produce excessively long prompts, driving up cost and latency. Learn how LLMLingua compresses prompts by up to 20x while maintaining quality, reducing latency, and supporting improved UX.

microsoft.com
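To get a feel for the compression idea, here is a toy sketch. It is not the actual LLMLingua algorithm (which uses a small language model to score token informativeness); it simply drops common low-information words to shrink a prompt while keeping the content words.

```python
# Toy illustration of prompt compression, NOT the real LLMLingua
# method. We drop common low-information words and report the
# resulting compression ratio.

LOW_INFO = {"the", "a", "an", "of", "to", "and", "in", "that", "is",
            "for", "on", "with", "as", "it", "this", "be", "are"}

def compress_prompt(prompt: str) -> tuple[str, float]:
    """Drop low-information words; return (compressed, compression ratio)."""
    words = prompt.split()
    kept = [w for w in words if w.lower().strip(".,") not in LOW_INFO]
    ratio = len(words) / max(len(kept), 1)
    return " ".join(kept), ratio

compressed, ratio = compress_prompt(
    "Summarize the main findings of the report and list the key risks "
    "that are mentioned in the final section of the document."
)
print(compressed)
print(f"compression: {ratio:.2f}x")
```

A real compressor scores each token's contribution to the downstream answer rather than using a fixed stopword list, which is how LLMLingua preserves quality at much higher ratios.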

GitHub makes Copilot Chat generally available, letting devs ask questions about code

Earlier this year, GitHub rolled out Copilot Chat, a ChatGPT-like programming-centric chatbot, for organizations subscribed to Copilot for Business. Copilot Chat more recently came to individual Copilot customers — those paying $10 per month — in beta. And now, GitHub’s launching Chat in general availability for all users.

techcrunch.com

Video

Azure AI Document Intelligence - OCR on steroids

In the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives using an optical cod...

youtube.com

Herding LLMs Towards Structured NLP

With the rise of the latest generation of large language models (LLMs), prototyping natural language processing (NLP) applications has become easier and more access...

youtube.com
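The talk's theme, steering LLMs toward structured output, can be sketched with a hedged example: validate the model's reply against an expected JSON shape and re-ask on failure. The `fake_llm` stub and the schema keys below are invented for illustration; a production version would call a real model API.

```python
import json

# Hypothetical stub standing in for a real LLM call (invented for
# illustration). Its first reply is unstructured; the retry is valid JSON.
def fake_llm(prompt: str, attempt: int) -> str:
    if attempt == 0:
        return "Sure! Here are the entities: Paris, Berlin"
    return '{"entities": ["Paris", "Berlin"], "language": "en"}'

REQUIRED_KEYS = {"entities", "language"}

def extract_structured(prompt: str, max_attempts: int = 3) -> dict:
    """Ask the model for JSON; re-ask until the reply parses and
    contains the required keys."""
    for attempt in range(max_attempts):
        reply = fake_llm(prompt, attempt)
        try:
            data = json.loads(reply)
        except json.JSONDecodeError:
            continue  # malformed JSON: re-prompt
        if REQUIRED_KEYS <= data.keys():
            return data
    raise ValueError("model never produced valid structured output")

result = extract_structured("Extract entities as JSON with keys entities, language.")
print(result["entities"])
```

Parse-and-retry is the simplest herding strategy; constrained decoding and schema-guided generation push the same idea into the sampling step itself.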

Articles

Giga ML wants to help companies deploy LLMs offline

AI is all the rage — particularly text-generating AI, also known as large language models (think models along the lines of ChatGPT). In one recent survey of ~1,000 enterprise organizations, 67.2% say that they see adopting large language models (LLMs) as a top priority by early 2024. But barriers stand in the way.

techcrunch.com

Image recognition accuracy: An unseen challenge confounding today’s AI

“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.

news.mit.edu

Building a Million-Parameter LLM from Scratch Using Python

A Step-by-Step Guide to Replicating LLaMA Architecture

levelup.gitconnected.com
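To see why a small LLaMA-style decoder lands in the millions of parameters, here is a rough count for a tiny illustrative configuration. All config numbers below are assumptions for the sketch, not taken from the article.

```python
# Rough parameter count for a tiny LLaMA-style decoder.
# All config values are illustrative; the guide's exact
# architecture may differ.

vocab_size = 4096
d_model    = 128
n_layers   = 8

embedding  = vocab_size * d_model        # token embedding table
attn       = 4 * d_model * d_model       # Wq, Wk, Wv, Wo projections
ffn_hidden = int(8 / 3 * d_model)        # SwiGLU hidden-size rule of thumb
mlp        = 3 * d_model * ffn_hidden    # gate, up, and down projections
norms      = 2 * d_model                 # two RMSNorm weight vectors
per_layer  = attn + mlp + norms

total = embedding + n_layers * per_layer + d_model  # + final RMSNorm
print(f"{total:,} parameters")  # prints 2,098,304 parameters
```

Scaling any of these knobs (vocabulary, width, depth) moves the total quickly, which is why production LLaMA models sit in the billions rather than the millions.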

NeurIPS 2023 highlights breadth of Microsoft’s machine learning innovation

We’re proud to have 100+ accepted papers at NeurIPS 2023, plus 18 workshops. Several submissions were chosen as oral presentations and spotlight posters, reflecting groundbreaking concepts, methods, and applications. Here’s an overview of those submissions.

microsoft.com

Ambient AI Is Here, And We Are Blissfully Unaware Of It

'AI will evolve to become an undercover operating system for professionals, particularly when it comes to using the technology for research and idea generation.'

forbes.com

MIT Generative AI Week fosters dialogue across disciplines

During the last week of November, MIT hosted symposia and events aimed at examining the implications and possibilities of generative AI.

news.mit.edu

Classifying Source code using LLMs — What and How

Iterating on prompts can yield a highly detailed classification context: nailing edge cases and describing our intent precisely, as in the earlier example, so we don't rely on the LLM's own definition of 'malicious' but instead explain what we consider a malicious snippet. Consider a classic use case, spam detection: the baseline approach is to train a simple bag-of-words (BOW) classifier, which can be deployed on weak (and therefore cheap) machines, or run inference entirely on edge devices at no cost.

towardsdatascience.com
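The spam-detection baseline mentioned above can be sketched without any ML framework: a minimal bag-of-words naive Bayes classifier. The four training examples are invented for illustration.

```python
import math
from collections import Counter

# Toy training data (invented for illustration).
train = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch plans with the team", "ham"),
]

# Per-class word frequencies: the bag-of-words representation.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(w for c in word_counts.values() for w in c)

def predict(text: str) -> str:
    """Naive Bayes with add-one smoothing over the BOW counts."""
    scores = {}
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        log_prob = math.log(class_counts[label] / len(train))
        for word in text.split():
            count = word_counts[label][word] + 1  # add-one smoothing
            log_prob += math.log(count / (total_words + len(vocab)))
        scores[label] = log_prob
    return max(scores, key=scores.get)

print(predict("free money prize"))  # classified as spam
```

This is exactly the kind of model the article contrasts with LLM classification: a few counters and a log-probability sum, cheap enough to run anywhere, at the cost of the nuance an LLM-written classification context can capture.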
