Global AI Weekly

Issue number: 34 | Tuesday, January 2, 2024

Highlights

10 AI Predictions For 2024

The world of AI will evolve in dramatic and surprising ways in 2024.

forbes.com

This week in AI: AI ethics keeps falling by the wayside

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

techcrunch.com

LLMLingua: Innovating LLM efficiency with prompt compression

Advanced prompting techniques for LLMs can lead to excessively long prompts, increasing latency and cost. Learn how LLMLingua compresses prompts by up to 20x while maintaining quality, reducing latency, and supporting an improved user experience.

microsoft.com
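
For readers who want to try prompt compression hands-on, here is a minimal sketch of how the open-source llmlingua Python package is typically called; the example prompt, question, and token budget are illustrative assumptions, and the exact interface may vary by package version.

from llmlingua import PromptCompressor

# Loads a small causal LM used to score token informativeness
# (downloads a LLaMA-family model on first use by default).
compressor = PromptCompressor()

long_prompt = "...many demonstrations and context paragraphs..."  # placeholder

result = compressor.compress_prompt(
    long_prompt,
    instruction="Answer the question using the context.",
    question="What does LLMLingua do?",
    target_token=200,  # rough budget for the compressed prompt
)

# The result contains the compressed prompt plus before/after token counts.
print(result["compressed_prompt"])
print(result["origin_tokens"], "->", result["compressed_tokens"])

The compressed prompt can then be passed to whichever downstream LLM you normally use, trading a small compression step for lower token spend and latency.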

GitHub makes Copilot Chat generally available, letting devs ask questions about code

Earlier this year, GitHub rolled out Copilot Chat, a ChatGPT-like programming-centric chatbot, for organizations subscribed to Copilot for Business. Copilot Chat more recently came to individual Copilot customers — those paying $10 per month — in beta. And now, GitHub’s launching Chat in general availability for all users.

techcrunch.com

Video

Azure AI Document Intelligence - OCR on steroids

In the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives using an optical cod...

youtube.com
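
As a rough illustration of the service the video covers, the sketch below reads text from a document with the azure-ai-formrecognizer Python SDK (the package Azure AI Document Intelligence inherited from Form Recognizer); the endpoint, key, and file name are placeholders.

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for your Document Intelligence resource.
client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Run the prebuilt OCR ("read") model over a local PDF.
with open("scanned_invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-read", document=f)
result = poller.result()

# Print the recognized text line by line.
for page in result.pages:
    for line in page.lines:
        print(line.content)

Beyond the prebuilt read model used here, the service also offers prebuilt models for documents such as invoices and receipts, plus custom-trained models.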

Herding LLMs Towards Structured NLP

With the rise of the latest generation of large language models (LLMs), prototyping natural language processing (NLP) applications has become easier and more access...

youtube.com
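
The talk’s theme, steering free-form LLM output into structured NLP results, can be illustrated with a simple pattern: ask for JSON with a fixed set of keys, then parse and validate it before trusting it. The sketch below is a generic illustration, not code from the video; call_llm is a hypothetical stand-in for a real model client.

import json

# Keys the downstream application expects.
SCHEMA_KEYS = {"sentiment", "entities"}

PROMPT = (
    "Return ONLY a JSON object with keys 'sentiment' (positive/negative/neutral) "
    "and 'entities' (list of strings) for the following text:\n{text}"
)

def call_llm(prompt: str) -> str:
    # Hypothetical stub; replace with a call to your model of choice.
    return '{"sentiment": "positive", "entities": ["Copilot Chat", "GitHub"]}'

def extract(text: str) -> dict:
    raw = call_llm(PROMPT.format(text=text))
    data = json.loads(raw)  # raises if the model returned malformed JSON
    missing = SCHEMA_KEYS - data.keys()
    if missing:
        raise ValueError(f"LLM output missing keys: {missing}")
    return data

print(extract("GitHub made Copilot Chat generally available."))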

Articles

Giga ML wants to help companies deploy LLMs offline

AI is all the rage — particularly text-generating AI, also known as large language models (think models along the lines of ChatGPT). In one recent survey of ~1,000 enterprise organizations, 67.2% say that they see adopting large language models (LLMs) as a top priority by early 2024. But barriers stand in the way.

techcrunch.com

Image recognition accuracy: An unseen challenge confounding today’s AI

“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.

news.mit.edu

Building a Million-Parameter LLM from Scratch Using Python

A Step-by-Step Guide to Replicating LLaMA Architecture

levelup.gitconnected.com
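
Two of the LLaMA-specific building blocks such a guide has to replicate, RMSNorm and the SwiGLU feed-forward, fit in a few lines of PyTorch. This is an illustrative sketch based on the standard definitions of those layers, not the article’s own code; the dimensions in the smoke test are arbitrary.

import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer norm, as used in LLaMA (no mean subtraction, no bias)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

class SwiGLU(nn.Module):
    """Gated feed-forward block (SwiGLU), the MLP variant used in LLaMA-style models."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden_dim, bias=False)
        self.w_up = nn.Linear(dim, hidden_dim, bias=False)
        self.w_down = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(nn.functional.silu(self.w_gate(x)) * self.w_up(x))

# Tiny smoke test: batch of 2 sequences, 8 tokens, model width 64.
x = torch.randn(2, 8, 64)
out = SwiGLU(64, 172)(RMSNorm(64)(x))
print(out.shape)  # torch.Size([2, 8, 64])

A full decoder block would wrap these around causal self-attention with rotary position embeddings.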

NeurIPS 2023 highlights breadth of Microsoft’s machine learning innovation

We’re proud to have 100+ accepted papers at NeurIPS 2023, plus 18 workshops. Several submissions were chosen as oral presentations and spotlight posters, reflecting groundbreaking concepts, methods, or applications. Here’s an overview of those submissions.

microsoft.com

Ambient AI Is Here, And We Are Blissfully Unaware Of It

'AI will evolve to become an undercover operating system for professionals, particularly when it comes to using the technology for research and idea generation.'

forbes.com

MIT Generative AI Week fosters dialogue across disciplines

During the last week of November, MIT hosted symposia and events aimed at examining the implications and possibilities of generative AI.

news.mit.edu

Classifying Source code using LLMs — What and How

Iterating on prompts can leave you with an extremely detailed classification context: chasing edge cases and spelling out intent more precisely, as in our earlier example of explaining what we consider a malicious snippet rather than relying on the LLM’s own definition of ‘malicious’. Compare that with a classic use case such as spam detection, where the baseline approach is to train a simple bag-of-words (BOW) classifier that can be deployed on weak (and therefore cheap) machines, or even run inference on edge devices at no cost.

towardsdatascience.com
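
For contrast with prompt-engineered LLM classification, the cheap baseline the article mentions looks roughly like this in scikit-learn; the tiny inline dataset is invented purely for illustration.

# Minimal bag-of-words spam classifier: CountVectorizer features + Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "WIN a FREE prize, click this link now",
    "Lowest price meds, limited offer",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review my pull request today?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["free prize link", "please review the attached agenda"]))

A model like this runs comfortably on modest hardware, which is exactly the trade-off the article weighs against the flexibility of LLM-based classifiers.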
