Open source LLM running offline on mobile using /e/OS with the Phi-2 2.8B transformer

Imagine an LLM running locally on a smartphone.

It could be used to power an offline assistant able to add an appointment to your calendar, open an app, … all without privacy issues.

This has become reality with this proof of concept using the Phi-2 2.8B transformer model running on /e/OS.

It is very slow, so not usable until we have dedicated AI chips on SoCs, but it works.

:pray: @Stypox

:point_down: :point_down: :point_down:

Demo (video)

#opensource #AI on #mobile

Regain your privacy! Adopt /e/OS, the unGoogled mobile OS and online services.


Whatever the crowdfunding, I am in.


Great development.
In the meantime, interested users might enjoy the power of machine learning by using Tidy: https://github.com/slavabarkov/tidy
It works great on /e/OS.

Futo, my other recommendation for STT, powered by a local Whisper model, is here: STT - Speech to text - #22 by tyxo


Super cool! Out of curiosity, what hardware was this run on? Is there an easy way I can install and play with this myself?

It’s running on a Murena Fairphone 5


Great great job, bravo :slight_smile:

Really great job. An NPU on the mobile chip is mandatory; I hope to see one soon on trusted hardware. I would be very glad to have my photos processed on /e/OS without the cloud, along with some synchronization that is currently not possible without certain apps.

This looks quite nice and impressive, but frankly, running an LLM for the basic types of tasks shown seems like overkill.

You could cover all the use cases shown in the video by just doing voice recognition and then matching for the key phrases you would want to cover, either directly on the text or after parsing grammar.

Say the weather skill would need to look for a few patterns like this:
((“what is” | “what’s” | “tell me” | “show me”) + “the”) + “weather” + (optionally “in” [place])

And if [place] is not set, use the current location. If location services are disabled, ask the user for a location, with their home address as the preselected default option.
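For illustration, here is a minimal Kotlin sketch of such a pattern-based weather skill; the regex and the helpers currentLocation() / fetchWeather() are hypothetical stand-ins, not taken from any existing assistant:

```kotlin
// Minimal sketch of a pattern-based weather skill, assuming speech has already
// been turned into text. currentLocation() and fetchWeather() are placeholders.
val weatherPattern = Regex(
    """(?:what is|what's|tell me|show me)\s+the\s+weather(?:\s+in\s+(.+))?""",
    RegexOption.IGNORE_CASE
)

fun currentLocation(): String = "home"            // stand-in for location services
fun fetchWeather(place: String): String =         // stand-in for a real weather API
    "Weather for $place: sunny, 18 °C"

fun handleWeatherUtterance(utterance: String): String? {
    val match = weatherPattern.matchEntire(utterance.trim()) ?: return null
    // Group 1 is the optional [place]; fall back to the current location if absent.
    val place = match.groupValues.getOrNull(1)?.takeIf { it.isNotBlank() }
        ?: currentLocation()
    return fetchWeather(place)
}

fun main() {
    println(handleWeatherUtterance("What's the weather in Stockholm"))  // explicit place
    println(handleWeatherUtterance("show me the weather"))              // current location
}
```

In practice you would register many such patterns, one per skill, and dispatch the utterance to the first skill whose pattern matches.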

You could even connect up other skills that query Wikidata, so for example “what is the x of y” could be answered by looking up attribute x of knowledge graph object y, maybe in conjunction with a synonym table that could map “number of citizens” to “population” and so on.
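As a sketch of that idea (the synonym entries and attribute names below are purely illustrative; a real skill would map them to actual Wikidata property IDs and run a query):

```kotlin
// Sketch of a synonym table plus a "what is the x of y" parser.
val attributeSynonyms = mapOf(
    "number of citizens" to "population",
    "amount of inhabitants" to "population",
    "size" to "area",
)

val factPattern = Regex("""what(?: is|'s) the (.+) of (.+)""", RegexOption.IGNORE_CASE)

// Returns the normalized attribute and the entity to look it up on,
// e.g. "population" to "Sweden".
fun parseFactQuery(utterance: String): Pair<String, String>? {
    val match = factPattern.matchEntire(utterance.trim()) ?: return null
    val (rawAttribute, entity) = match.destructured
    val attribute = attributeSynonyms[rawAttribute.lowercase()] ?: rawAttribute
    return attribute to entity
}
```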

If you want to even answer questions like “What’s the weather in the capital of Sweden”, you could add logic to the weather skill so that, if [place] isn’t found directly in the list of places, it does a nested lookup and lets another skill try to resolve it.
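A rough sketch of that chaining, with the fallback skill passed in as a function (the names here are hypothetical):

```kotlin
// Sketch of the nested lookup: if [place] isn't a known location, let another
// skill (passed in as fallbackSkill) try to resolve it, e.g. turning
// "the capital of Sweden" into "Stockholm".
fun resolvePlace(
    rawPlace: String,
    knownPlaces: Set<String>,
    fallbackSkill: (String) -> String?,
): String? {
    if (rawPlace in knownPlaces) return rawPlace
    return fallbackSkill(rawPlace)?.takeIf { it in knownPlaces }
}
```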

You could easily run this type of system even on the most ancient of smartphones while covering a large number of the use cases people usually use these types of assistants for.

Of course it won’t answer questions like “What are the main arguments for and against a ban of Tik Tok”, but for administrative tasks like opening apps, setting timers, querying stocks and weather and changing settings, it should more than suffice.


I think that what you describe is actually how Dicio works: https://github.com/Stypox/dicio-android?tab=readme-ov-file#adding-skills :wink:


From a cursory look, Dicio does indeed seem to be an implementation of that concept. It already appears capable of most of what is covered in Gael’s LLM demo, just faster. From what I can see on the GitHub page, it mainly needs skills for calendar integration, unit conversion, system settings and WikiData.

Without having tried the app yet, I’d say adapting Dicio for /e/ and contributing to it makes a lot more sense at the current stage than building something LLM-based from scratch. Once running LLMs locally on devices becomes feasible (either through hardware acceleration or through optimized models), the /e/ team could contribute an LLM backend to Dicio.

However, I notice Gael tagged a user named “Stypox” in his post, which is the same name as the owner of the Dicio GitHub repository and the publisher of Dicio on Google Play, so maybe everyone is way ahead of me here anyway :sweat_smile:

