Is it possible to make Q builds using Docker?

The issue is marked internal, so it will not be visible to all users. I will update the status on this thread regularly. Feel free to check in if I miss an update.

Hi @Manoj
Any news on this? English golf courses are closing again on Thursday :frowning: so I’m going to have some spare time to make builds. I really would like to try making some Q builds with Docker if that’s going to be possible :slight_smile:

There is a fix for the issue (made by one of our dev team members) which needs to be merged. I am now working on convincing the team to add the code into the main code base, so that the Docker build works for all users.

1 Like

No updates on this… the team is busy with other tasks.

It works fine now. I built already for several devices with local manifests.

What’s your issue? I can share my build scripts if you want.

Thanks for your offer.

I was unable to build with Docker as described in this post above. I know that many people are building ‘the old fashioned way’ (i.e. without Docker, installing all the build tools and scripts necessary) but, for a number of reasons, that isn’t an option for me.

As I understand it, from @Manoj’s posts above, there is a problem with the Docker image: the problem is known and a fix is available, but the developer team has not had time to merge it, so the publicly available Docker image is still broken.

If you are building with that Docker image, then the fix has clearly been merged and I can start making builds. But, as you mention ‘build scripts’ I suspect you are not building with Docker, and I’ll have to wait :frowning:

Hi, I am building with Docker, and the only issue I faced was that I did not have enough RAM for some super-intensive build steps (e.g. Metalava).

What was broken for the community docker image was how some manifests were populated automatically for official LOS devices.

But if you build with local manifests, that’s all good.

Basically, my script is just a .sh file that I launch. Content is as below.

#!/bin/bash
sudo docker pull registry.gitlab.e.foundation:5000/e/os/docker-lineage-cicd:community
sudo docker run \
-v "/media/HDD_3To/Android/eOS/src:/srv/src" \
-v "/media/HDD_3To/Android/eOS/zips:/srv/zips" \
-v "/media/HDD_3To/Android/eOS/logs:/srv/logs" \
-v "/media/Donnees/ccache:/srv/ccache" \
-v "/media/HDD_3To/Android/local_manifests/q:/srv/local_manifests" \
-e "BRANCH_NAME=v1-q" \
-e "DEVICE_LIST=oneplus3" \
-e "INCLUDE_PROPRIETARY=false" \
-e "SIGNATURE_SPOOFING=restricted" \
-e "OTA_URL=https://yourOTAserverurlifany" \
-e "REPO=https://gitlab.e.foundation/e/os/releases.git" \
-e "CCACHE_SIZE=30G" \
-e "ANDROID_JACK_VM_ARGS=-Dfile.encoding=UTF-8 -XX:+TieredCompilation -Xmx10G" \
registry.gitlab.e.foundation:5000/e/os/docker-lineage-cicd:community
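One pitfall with the `-v` bind mounts above: if a host directory passed to `-v` does not exist, Docker creates it owned by root, which can cause permission surprises on later runs. A small pre-flight sketch (the base path here is just an example; adapt the names to your own disk layout):

```shell
#!/bin/sh
# Example base directory; replace with your own build location
BASE="${BASE:-/tmp/eos-build}"

# Create every directory the -v flags expect before running docker,
# so they are owned by the current user rather than root
for d in src zips logs ccache local_manifests; do
    mkdir -p "$BASE/$d"
done

ls "$BASE"
```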

PS: I am the French user mentioned earlier, discussing with Romain.

Makes sense: I was trying to build for an officially supported device, to prove the build process and environment worked, before building for an unsupported device.

Most of the unofficial builds I have made (for Sony devices stuck on Nougat) do not use local manifests: I assume they worked with the automatically generated ones. I guess they are unlikely to work if no change has been pushed to the community Docker image.

I may try and have a play building for the unsupported device that interests me, Sony Xperia XZ1 Compact (lilac).

Thanks for your help

1 Like

You’re welcome.

Fix is here.

Let me ask Thilo if his fix has been pushed to the community image.

I guess the answer is “Not yet” :slight_smile:

You guess right… He doesn’t know. :slight_smile:

But the local_manifests method works fine.

Sadly, not for me :frowning:
I’ve tried to build with a local manifest, but I’m getting the same error:

>> [Thu Nov 12 17:16:35 UTC 2020] Copying '/srv/local_manifests/*.xml' to '.repo/local_manifests/'
Traceback (most recent call last):
  File "/root/build_manifest.py", line 61, in <module>
    f.write(xmlstr)
TypeError: write() argument must be str, not bytes
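For reference, this is the classic Python 3 bytes-vs-str mismatch: `minidom`'s `toprettyxml(encoding=...)` returns `bytes`, which cannot be written to a file opened in text mode. A minimal sketch of the problem and the two usual fixes (the file path here is just an example, not from the build script):

```python
import xml.dom.minidom

# toprettyxml(encoding=...) returns bytes in Python 3, not str
xmlstr = xml.dom.minidom.parseString("<manifest/>").toprettyxml(encoding="UTF-8")
print(type(xmlstr))  # <class 'bytes'>

# Writing bytes to a text-mode file raises exactly the TypeError above:
#   TypeError: write() argument must be str, not bytes
# Fix 1: open the file in binary mode
with open("/tmp/manifest_demo.xml", "wb") as f:
    f.write(xmlstr)

# Fix 2: decode the bytes before writing in text mode
with open("/tmp/manifest_demo.xml", "w") as f:
    f.write(xmlstr.decode("utf-8"))
```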

My build command (pretty much copied from your script, but modified for my own setup) is

sudo docker run \
-v "/srv/e/src:/srv/src" \
-v "/srv/e/zips:/srv/zips" \
-v "/srv/e/logs:/srv/logs" \
-v "/srv/e/manifests:/srv/local_manifests" \
-v "/srv/e/userscripts:/srv/userscripts" \
-v "/srv/e/ccache:/srv/ccache" \
-e "ANDROID_JACK_VM_ARGS=-Dfile.encoding=UTF-8  -XX:+TieredCompilation -Xmx10G" \
-e "INCLUDE_PROPRIETARY=false" \
-e "SIGNATURE_SPOOFING=restricted" \
-e "CCACHE_SIZE=200G" \
-e "BRANCH_NAME=v0.12.10-q" \
-e "DEVICE_LIST=lilac" \
-e "REPO=https://gitlab.e.foundation/e/os/releases.git" \
registry.gitlab.e.foundation:5000/e/os/docker-lineage-cicd:community

My roomservice.xml contains

<?xml version="1.0" encoding="UTF-8"?>
<manifest>

    <!-- SONY -->
    <project name="whatawurst/android_kernel_sony_msm8998" path="kernel/sony/msm8998" remote="github" revision="lineage-17.1" />
    <project name="whatawurst/android_device_sony_yoshino-common" path="device/sony/yoshino-common" remote="github" revision="lineage-17.1" />
    <project name="whatawurst/android_device_sony_lilac" path="device/sony/lilac" remote="github" revision="lineage-17.1" />

    <!-- Pinned blobs for lilac -->
    <project name="whatawurst/android_vendor_sony_lilac" path="vendor/sony/lilac" remote="github" revision="lineage-17.1" />

</manifest>
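Since the container simply copies `/srv/local_manifests/*.xml` into `.repo/local_manifests/`, a malformed manifest only surfaces after the build has started. A quick well-formedness check before kicking off a multi-hour build (a sketch; the inline manifest mirrors one entry from the file above):

```python
import xml.etree.ElementTree as ET

manifest = """<?xml version="1.0" encoding="UTF-8"?>
<manifest>
    <project name="whatawurst/android_device_sony_lilac"
             path="device/sony/lilac" remote="github" revision="lineage-17.1" />
</manifest>"""

# ET.fromstring raises ParseError on malformed XML, so reaching the
# loop below means the manifest is well-formed
root = ET.fromstring(manifest)
projects = [(p.get("name"), p.get("path")) for p in root.iter("project")]
for name, path in projects:
    print(f"{name} -> {path}")
```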

The compute instance I am using is running Ubuntu 18.04 which is what I have used for previous successful nougat and pie builds.

I also tried with `-e "BRANCH_NAME=v1-q"`, and without the ANDROID_JACK_VM_ARGS variable.

I bet the issue comes from the way you set up your volumes.

-v "/srv/e/src:/srv/src" \

The above is wrong.

-v "/hereyoushouldputtherealpathtothe/e/src:/srv/src" \

Basically, if your build directory is /users/home/peter/e/src, then you should write the whole path, starting from the root of your system.

See, for me it’s /media/HDD_3To/etc…


No - it’s correct. I’m not using my own machine, I’m using a remote compute instance at OVH, with a big volume mounted at /srv. All the build directories exist there. This setup has worked fine in the past.

Ah… Maybe someone else could help then…

Hi @Manoj
Can you give any idea when this fix will be made available in the community Docker image?
Thanks

The check-in is getting delayed because the team is focusing on resolving the issue we faced with the previous set of builds. That issue is almost resolved and the testing is progressing smoothly. The expectation is that in the coming week we should first be able to release all dev and upgrade builds, and then the team should be able to concentrate on merging this and a number of other pending MRs. I have about 10 MRs pending for merge, so believe me when I say I am waiting for this backlog to clear :crossed_fingers:

OK thanks @Manoj . I’ll find something else to do until then :slight_smile:

The fixed build of the Docker image seems to be working as designed now (though it’s not yet in the main community branch).

Thanks

Thanks for the update. We will move it to the community branch now that it is verified :slight_smile:

Update: The fix has been merged into the community image, and the below should now work:

docker pull registry.gitlab.e.foundation:5000/e/os/docker-lineage-cicd:community
1 Like