Build /e/, docker, write a new local_manifest

I have a question about creating a local_manifest for Samsung A3 a3xelte.

I see from e / os / local_manifests · GitLab that only a dozen devices are listed with a local_manifest, and most files are only 1 to 4 months old. Perhaps I missed some instructions, but I am guessing that this is a fairly new way of doing things. Is there an instruction page I missed?

I think I have the information I need for a local_manifest (from roomservice.xml plus known sources). Is there an automated way to create a local_manifest, or is it written by hand?

Manifests are mostly only needed for devices which are not officially supported by LineageOS. For those devices, the /e/ ROMs were mostly created first as unofficial ROMs, with local manifests. For devices that are supported by LOS, the /e/ build system knows to look in the LOS repos, so no local manifest is required.

I’ve always written them by hand: usually by taking a working manifest for a different device, and modifying it as required. Note that any errors in the local manifest (syntax errors, typos) will result in your build failing in non-obvious ways. If your build does fail, your local manifest is often the first place to look.
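To illustrate, a hand-written local manifest is just a small XML file. A minimal sketch might look like this (the project names and paths below are placeholders, not the real a3xelte repositories, and the remote "e" is assumed to be defined in the main manifest):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <!-- Hypothetical entries: replace name/path with the actual
       device and kernel repositories for your device -->
  <project name="some-account/android_device_samsung_a3xelte"
           path="device/samsung/a3xelte" remote="e" />
  <project name="some-account/android_kernel_samsung_a3xelte"
           path="kernel/samsung/a3xelte" remote="e" />
</manifest>
```

Each `project` element maps a remote repository (`name`) onto a directory in the source tree (`path`); a wrong `name` or `path` is exactly the kind of typo that makes the build fail in non-obvious ways.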

Good luck! :wink:


Some time ago I found a very interesting thread at XDA :


Thank you both for your support. I was following the local_manifest suggestion to force me to investigate the build details deeper.

I went for a local_manifest pointing at remote=“e” trying to “copy” a Samsung j5y17lte.xml from the gitlab link above.

I have a partition dedicated to a build of /e/ for the Samsung A3. There is zero content on the output side (zips). After 32 hours of downloading I now have a 100% full 373 GB partition. Snips from such logs as exist amount to:

sed: couldn’t flush stdout: No space left on device
Syncing branch repository …
/root/ line 282: echo: write error: No space left on device

With no log, I have yet to consolidate my learning!

Any suggestions for how I could prune the contents and restart the build with sync skipped, so that I could at least see a new error report?

Edit. I think “pruning” is unnecessary - it is an ext4 partition, and I seem to remember the standard layout reserves a 5% root-only margin - looking it up!
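For reference, the reserved-block margin can be inspected and adjusted with tune2fs (a sketch; /dev/sda4 here is a placeholder for the build partition's device, and both commands need root):

```shell
# Show the current reserved block count (the root-only margin)
sudo tune2fs -l /dev/sda4 | grep -i 'reserved block count'

# Lower the reserve from the default 5% to 1% (reversible at any time)
sudo tune2fs -m 1 /dev/sda4
```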

Thanks in advance.

100% of 373GB seems way too high to me, but I’m not aware of Samsung builds …

For information, I could build a Q 0.17.1 /e/ ROM for a Xiaomi on a small VM, here is what it looks like after build :

root@ubuntu:~# df -h | grep '/dev/sd'
/dev/sda1        32G   19G   12G  62% /
/dev/sdb        344G  187G  140G  58% /srv

NB : “after” is important : some big files are removed at the end of .zip building. However, the build didn’t suffer any lack of disk space :wink:

If it can help, below is the du -shx * of src/Q :

124K    android
0       Android.bp
85M     art
45M     bionic
13M     bootable
0       bootstrap.bash
16M     build
1.4G    cts
27M     dalvik
222M    developers
150M    development
71M     device
8.1G    external
1.9G    frameworks
405M    hardware
989M    kernel
87M     libcore
448K    libnativehelper
25M     lineage
5.1M    lineage-sdk
4.0K    Makefile
8.0K    out
950M    packages
916K    pdk
8.7M    platform_testing
33G     prebuilts
30M     sdk
508M    system
403M    test
102M    toolchain
1.7G    tools
43G     vendor

(vendor/samsung is 3.4GB)

Thank you @smu44 for taking the time to share your folder sizes with du -shx * of src/Q.

My result was very similar, but when I go up a level I get:

/e-partition/srv/e$ du -shx *
6.4G    ccache
17M     logs
366G    src	
8.0K    zips

And df output (confirmed by parted) that never changed:

df -h /e-partition
Filesystem      Size  Used Avail Use% 
/dev/sda4       373G  373G     0 100%

A file manager shows no files, .files or folders to account for this “transient data”.
Docker appeared to have such robust persistence that prune commands, even when reporting success, did not seem to interfere with the “job in progress”.
It seemed significant that the repo log often mentioned “pruning disabled” during the download stage, which I could not exit, so the build could never finish and flush.
After much searching I concluded that a 100% full Docker partition is problematic to recover from on Linux, so I decided to format the partition.

Wise words from @Petefoth

any errors in the local manifest (syntax errors, typos) will result in your build failing in non-obvious ways

turned out to be quite true! :blush:

Lesson learned, new build in progress!
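Since a single typo can cost a multi-day sync, a quick well-formedness check of the manifests before launching is cheap insurance. A minimal sketch, assuming python3 is available on the host and the manifests live under .repo/local_manifests/:

```shell
# Check each local manifest for XML well-formedness before building
for f in .repo/local_manifests/*.xml; do
  if python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])" "$f" 2>/dev/null; then
    echo "OK: $f"
  else
    echo "BROKEN: $f"
  fi
done
```

Note this only catches syntax errors; a manifest that is valid XML but points at a wrong repository or branch will still fail later.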

:crossed_fingers: !

If it happens again please share repo log and, if available, build log.

You may also want to share Docker container log :

  • docker container ls -a
  • then docker container logs --details xxxxxx (replace xxxxxx with last container ID from previous command)

To show all space used by Docker : docker system df -v
To clean up Docker : docker system prune -a --volumes (you may want to use docker container ps then docker container kill xxxxxx before cleanup :wink: )

If possible, please also share your Docker launch command (along with local manifests, if used), so I can reproduce your build.

Good news (or kind of :smiley: ).
Today I launched a brand new build (Xiaomi Mi mix 2, v0.18-q), and I have exactly the same problem as you do !

root@ubuntu:/srv# du -sBG src/*
294G    src/Q
root@ubuntu:/srv# du -sBG src/Q/* | sort -n
0G      src/Q/Android.bp
0G      src/Q/bootstrap.bash
1G      src/Q/android
1G      src/Q/art
1G      src/Q/bionic
1G      src/Q/bootable
1G      src/Q/build
1G      src/Q/dalvik
1G      src/Q/developers
1G      src/Q/development
1G      src/Q/device
1G      src/Q/hardware
1G      src/Q/kernel
1G      src/Q/libcore
1G      src/Q/libnativehelper
1G      src/Q/lineage
1G      src/Q/lineage-sdk
1G      src/Q/Makefile
1G      src/Q/out
1G      src/Q/packages
1G      src/Q/pdk
1G      src/Q/platform_testing
1G      src/Q/sdk
1G      src/Q/system
1G      src/Q/test
1G      src/Q/toolchain
2G      src/Q/cts
2G      src/Q/frameworks
2G      src/Q/tools
9G      src/Q/external
33G     src/Q/prebuilts
43G     src/Q/vendor

I’ll try to take out some garbage, format the build filesystem as you did, and retry …

That can in no way be good news!

The real issue was that no version of the kill command would work for me! If I had managed to kill the running process, maybe I could have deleted its data.

I have pages of logs, deteriorating in quality as my frustration grew! I made a mental link with the overlay2, in the root filesystem /var/lib/docker/overlay2/. The “root” overlay could not be touched by “user” kill! I think that was the thing that made me go for format.

(This overlay even wanted to interfere after the e-partition was deleted, but I beat it that time!)

The format option has a big downside - I am still downloading, 46 hours later. I have an excellently tuned conky monitor telling me I have 136G downloaded of estimated target 140G.

Good luck.

Edit. Incidentally, tune2fs did not seem able to improve the situation for me, as Docker (running as root) could write into the 5% reserve. That had seemed a good route to try - but I reduced the reserve to 2% with no effect! Later, I felt that since I was on the point of doing a format anyway, I could reduce the reserve to zero. It was only at zero reserve that I liberated a few MB! (All that happened when I restarted the build was that a download restarted! Perhaps with more skill, I could have run the equivalent of make clean?)

I think it’s good news : you are not alone with this problem to solve :slight_smile:

I really don’t understand your problems with Docker, never seen such behavior in my test VM …
Docker overlays are self-managed, normally you just have to prune container/images.
What is weird is that you can’t kill the container(s) !
Does it survive a reboot ? Did you try a systemctl kill docker ?
I’m wondering if unusual growth of overlay storage may be caused by a missing mapping in your Docker launch command (the -v lines).
Some reading : Why is Docker filling up /var/lib/docker/overlay2? - Stack Overflow , server - Docker overlay2 eating Disk Space - Stack Overflow , Use the OverlayFS storage driver | Docker Documentation
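For a container that refuses to die, the escalation I would try looks like this (a sketch; xxxxxx stands for the container ID from docker container ls -a):

```shell
# Force-stop and remove a container that ignores a normal stop/kill
docker rm -f xxxxxx

# If the Docker daemon itself is wedged, restart it (systemd hosts)
sudo systemctl restart docker
```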

Some details about my test VM, maybe it can help …

  • Ubuntu 18 LTS, not really up to date (I’m too lazy :wink: ) in a VMware Workstation VM (16 CPU, 24GB RAM)
  • Docker installed as instructed by /e/ documentation
  • system disk is 32 GB (1-partition scheme)
  • build disk is an additional 350GB, mounted over /srv
  • I run everything as root (it’s a test VM …)
  • Docker launch script in /srv, launched with root in a text console (pts) :
root@ubuntu:/srv# cat
docker pull
docker run \
-v "/srv/src:/srv/src" \
-v "/srv/zips:/srv/zips" \
-v "/srv/logs:/srv/logs" \
-v "/srv/ccache:/srv/ccache" \
-v "/srv/mirror:/srv/mirror" \
-e "BRANCH_NAME=v0.18-q" \
-e "DEVICE_LIST=chiron" \
-e "REPO=" \
-e "ANDROID_JACK_VM_ARGS=-Dfile.encoding=UTF-8 -XX:+TieredCompilation -Xmx20G" \
  • of course, I created the src, zips, logs, ccache, mirror dirs before launching (755)

About the disk space problem in /srv :

  • the attempt from last evening (after formatting /srv with EXT4) leads to the same result, and this is also good news : I can reproduce the problem every time :wink:
  • as I already knew, ext filesystems can behave badly under some usage patterns : they can waste lots of disk space (I found some relevant reading about that : Same file and folders on same disk but different sizes)
  • as I did for many production servers over the years, I formatted /srv again with XFS; the build is running … (some info here :
  • at this time, the overlay is stable at its usual size :

root@ubuntu:/srv# df -h |grep overlay
overlay          32G   18G   12G  60% /var/lib/docker/overlay2/5500dabcf9b85c08b8645f6a0f5e5244013510388afe0d0e208461874ff0d1f9/merged

Same behavior with XFS, never seen that before :frowning:
Just re-launched without CCache, as an attempt to see if there is much missing …

Hi @smu44, it is great to have you at my side! Turns out I am being a bit slow to adjust. I was convinced my problem was all to do with my faulty local_manifest …

I really don’t understand your problems with Docker, never seen such behavior in my test VM … What is weird is that you can’t kill the container(s) !

That was all bad analysis. It was over-tired rambling as I tried to make sense of my notes taken over 48 hours. At that time I assumed that the runaway downloads started with my known error; why could I not kill the error, get behind it, and start again?, I was asking myself!

I was expressing that I could not end one little error from the recent past. I was blaming Docker for always restarting where it left off, carrying my local_manifest error along with it.

In light of you receiving an enormous download, I see that you are correct and better analysis is required!

overlay storage

I misrepresented this too. Seeing that overlay2 probably has a backup function, I guess I had the sense that this was holding my error, contributing to my woes. This was just part of the same bad analysis!

Just to be clear on this point, the overlay2 size (on rootfs) does seem to be stable, what I describe as runaway downloads only occurs in my e-partition.

One thing to cross check with you - the repo command.

Immediately before I ran into problems I followed this (seen in the output from my standard build command):

A new version of repo (2.15) is available.
You should upgrade soon:
cp /srv/src/Q/.repo/repo/repo /usr/local/bin/repo

I simply did exactly that, no reported error.

So now I am investigating that step, I see repo was present but not installed:

:~$ which repo

:~$ repo -v
error: repo is not installed.  Use "repo init" to install it here.

:/usr/local/bin$ sudo repo init
Downloading Repo source from
remote: Counting objects: 1, done
remote: Finding sources: 100% (38/38)
remote: Total 38 (delta 15), reused 38 (delta 15)
Unpacking objects: 100% (38/38), done.
repo: Updating release signing keys to keyset ver 2.3
fatal: manifest url is required.

:/usr/local/bin$ repo --version
repo version v2.16.3
       (tracking refs/heads/stable)
       (Tue, 20 Jul 2021 23:26:01 +0000)
repo launcher version 2.15
       (from /usr/local/bin/repo)
       (currently at 2.16.3)
repo User-Agent git-repo/2.16.3 (Linux) git/2.20.1 Python/3.7.3
git 2.20.1
git User-Agent git/2.20.1 (Linux) git-repo/2.16.3
Python 3.7.3 (default, Jan 22 2021, 20:04:44) 
[GCC 8.3.0]
OS Linux 4.19.0-16-amd64 (#1 SMP Debian 4.19.181-1 (2021-03-19))
CPU x86_64 (unknown)
Bug reports:

As the change in repo was the last odd thing to occur, it would be interesting to know if you got this repo command message and what action you took?

Regarding my new build, I enjoy a download rate of 3G per hour, so my current download of 207G has taken approximately 3 days, but still no build has started. Report to follow.

Here is a report on a build which I consider is showing signs of “runaway download” in that I had downloaded/synced 207G and no build had started.

I think that the main thing to jump out at me is

Branch name v0.17-q is a tag on e/os/releases, prefix with refs/tags/ for 'repo init'

as it seems directly related to my lack of understanding of the status of the repo command, mentioned in my last post.
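For what it’s worth, the message seems to be asking for something like this, if one were running repo init by hand (a sketch; <manifest-url> stands for the manifest repository URL, which the Docker scripts normally supply):

```shell
# A tag must be given with the refs/tags/ prefix, unlike a branch name
repo init -u <manifest-url> -b refs/tags/v0.17-q
```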

Define my setup

Debian Buster May 2021 update && upgrade to ‘10.9’ (upgrade today to 10.10)

Bare-metal install on a 456GiB hard drive, ext4 partitions throughout:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        28G   13G   14G  49% /
/dev/sda2		  3G          3G      linux-swap
/dev/sda3        55G   23G   29G  45% /home
/dev/sda4       373G  208G  146G  59% /rump

/dev/sda4 was reformatted immediately before this build attempt started.
Sizes shown are seen at end of this report.

Launch command

:/rump/srv$ docker run \
-v "/rump/srv/e/src:/srv/src" \
-v "/rump/srv/e/zips:/srv/zips" \
-v "/rump/srv/e/logs:/srv/logs" \
-v "/rump/srv/e/ccache:/srv/ccache" \
-e "BRANCH_NAME=v0.17-q" \
-e "DEVICE_LIST=a3xelte" \
-e "REPO=" \ \

Build trace

Set cache size limit to 50.0 GB
>> [Thu Jul 29 19:08:45 UTC 2021] Branch:  v0.17-q
>> [Thu Jul 29 19:08:45 UTC 2021] Devices: a3xelte,
>> [Thu Jul 29 19:08:45 UTC 2021] (Re)initializing branch repository
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 32964    0 32964    0     0  62313      0 --:--:-- --:--:-- --:--:-- 62196
Branch name v0.17-q is a tag on e/os/releases, prefix with refs/tags/ for 'repo init'
>> [Thu Jul 29 19:09:07 UTC 2021] Copying '/srv/local_manifests/*.xml' to '.repo/local_manifests/'
>> [Thu Jul 29 19:09:09 UTC 2021] Syncing branch repository

Stopping process

The process was stopped arbitrarily when I had downloaded 207G and no build had started. Evidence: .repo is the only folder in Q/

:/rump/srv/e/logs$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        28G   13G   14G  49% /
/dev/sda3        55G   23G   29G  45% /home
/dev/sda4       373G  207G  147G  59% /rump
overlay          28G   13G   14G  49% /var/lib/docker/overlay2/a07be1767f8cddafc43bc3697e73ce450fbfb33b99bca385d9de691b60587867/merged

:/rump/srv/e/logs$ docker ps
CONTAINER ID   IMAGE                                                                  COMMAND                  CREATED      STATUS      PORTS     NAMES
6530c126af9e   "/bin/sh -c /root/in…"   2 days ago   Up 2 days             festive_poitras

:/rump/srv/e/logs$ sudo du -sxh /var/lib/docker/overlay2/
1.8G    /var/lib/docker/overlay2/

:~$ docker stop 6530c126af9e

Inspect build and sizes

No build. Nothing, no folder, no log, in zips/

:/rump/srv/e$ ls
ccache  logs  src  zips

:/rump/srv/e$ sudo du -sBG src/*
208G    src/Q

:/rump/srv/e$ du -sBG src/ *
208G    src/
1G      ccache
1G      logs
1G      zips

:/rump/srv/e/src/Q$ ls -a
.  ..  .repo

:/rump/srv/e/src/Q$ du -sBG .repo
208G    .repo

:/rump/srv/e/src/Q/.repo$ du -sBG *
1G      local_manifests
1G      manifests
1G      manifests.git
1G      manifest.xml
206G    project-objects
2G      projects
1G      repo

:/rump/srv/e/src/Q/.repo/project-objects$ du -sBG *
1G      device
16G     e
1G      kernel
9G      LineageOS
159G    platform
5G      The-Muppets
19G     TheMuppets
1G      toolchain

:/rump/srv/e/src/Q/.repo/project-objects/platform$ du -sBG *
1G      build
4G      cts.git
1G      dalvik.git
1G      developers
20G     external
3G      frameworks
1G      hardware
1G      libnativehelper.git
1G      packages
1G      pdk.git
122G    prebuilts
1G      sdk.git
1G      system
1G      test
11G     tools

:/rump/srv/e/src/Q/.repo/project-objects/platform/prebuilts$ du -sBG *
1G      abi-dumps
11G     android-emulator.git
1G      asuite.git
1G      bundletool.git
1G      checkcolor.git
1G      checkstyle.git
19G     clang
2G      clang-tools.git
1G      devtools.git
1G      fuchsia_sdk.git
2G      gcc
1G      gdb
10G     go
12G     gradle-plugin.git
12G     jdk
1G      ktlint.git
1G      manifest-merger.git
2G      maven_repo
3G      misc.git
13G     ndk.git
1G      python
13G     qemu-kernel.git
13G     sdk.git
15G     tools.git
1G      vndk

For some ballpark figures: an average individual Android release branch in 2021 is ~50G, not counting vendor repositories and binaries. repo wraps git to manage groups of repositories. If 50G is fetched into .repo, checking it out (“switching to the tag/branch”, i.e. unpacking from the git object store into the working directory) will take another 50G, so expect at least 100G per release tag before the build even starts. A full AOSP git mirror (all branches) is currently at 550G (source).
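If disk space is the constraint, repo itself has some switches that can shrink both the fetch and the checkout (a sketch; <manifest-url> is a placeholder for the manifest repository, and these flags exist in recent repo versions):

```shell
# Shallow clone: only the latest commit of each project's history
repo init -u <manifest-url> -b refs/tags/v0.17-q --depth=1

# Fetch only the current branch, skip tags and clone bundles
repo sync -c --no-tags --no-clone-bundle -j4
```

Whether the Docker build scripts pass these flags through is another question; this is only how plain repo behaves.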

I haven’t used the docker-lineage-cicd image yet but will look for disk space issues when I do.


I have had the same message about the outdated repo for months (or years ?). We can ignore it; previous builds were OK with that version :slight_smile:

The repo binary, along with many other utilities, is downloaded by the scripts in the Docker image; you will find it at /rump/src/Q/.repo/repo/ . So there is no need to have it installed on your “host” system.
It is also included in the Docker image when the image is built, at /usr/local/bin/repo inside the Docker image filesystem.

At this point, I think you may find it useful to know how it works (disclaimer : I’m not 100% sure that /e/ Gitlab reflects the exact content of the current Docker image; it’s more for information purposes) :

If you want to know what’s really into your Docker image, you can run :

  • docker image ls (copy the image ID)
  • docker image save xxxxxx -o e.tar

If you want to interact with a running container (warning !) you can use :

  • docker container ps (copy the container ID)
  • docker exec -it xxxxxxxx /bin/sh

Thanks for pointing me to the .repo/project-objects/platform/prebuilts dir !
I completely missed it :crazy_face:

I have quite similar sizes, with some differences.
I’ll try to relaunch with v0.17-q & a3xelte, so we could compare …

Everything else seems absolutely normal to me :slight_smile:

Thanks ! :+1:

Any help welcome :slight_smile:

What I couldn’t figure out is : what changed since last month ?
I made a Q build for perseus 4 weeks ago, ran fine with my 350GB data disk …

I could pin down the date a little - the date of my OP! :slight_smile:

My notes show me tinkering with the a3xelte build on Friday 23 July at 20:18 UTC, where I successfully ran the build with -j1 --fail-fast, receiving logical output.

On Saturday 24 July at 19:50 UTC I started a build which overwhelmed the partition (after, at best estimate, 32 hours of syncing) and resulted in me formatting the partition.

My current build is running overnight, synced size recorded as 244G.

Thanks for this valuable info ! :slight_smile:

My attempt for a3xelte & v0.17-q also failed yesterday evening :frowning:

Meanwhile, I’m tracking down any change for the most-consuming dirs in src/Q/.repo/project-objects/platform/prebuilts.
So far, no luck with clang or tools.git : according to, AOSP tag android-10.0.0_r41 is used to fetch sources from Google, and the files didn’t change for a while (for example :

Too bad I didn’t keep the source tree from my last successful build, shame on me :hot_face:

Thank you very much for this attempt.

Was this a correctly reported failure?

Or was this a failure because the drive/partition was overfull?