WWDAACC20 Worldwide Developers Against Apple Censorship Conference

While the developer community focuses on Apple's WWDC, human rights groups will hold the Worldwide Developers Against Apple Censorship Conference on June 22, inviting human rights organizations from Tibet, East Turkestan (Xinjiang), Hong Kong, and elsewhere, together with international research institutes and open-source developers, to discuss how Apple, citing "compliance with local laws," helps the authorities in China, Hong Kong, and other places carry out censorship and undermine freedom of speech.

What follows is my personal take.

I'm writing this post simply because I want more people to know that:

  1. Apple is using "compliance with local laws" as an excuse to help authoritarian governments violate human rights.
  2. Apple's transparency on the human rights side of corporate social responsibility is extremely poor.

Perhaps because Apple has always kept a tight grip on company information (internal information, whether about policy or products, rarely leaks) and its PR is more skillful than that of companies like Facebook, there are far fewer individual news stories beneath the long-standing, big-picture criticism that "Apple helps authoritarian governments restrict human rights" than there are about Google or Facebook. Facebook has a scandal breaking every other week, and Google has employees walking out in protest every so often; the public pressure Apple faces on human rights is much lighter than what Facebook and Google face.

And as far as I know, apart from "we comply with local laws," Apple has said nothing else about helping authoritarian governments restrict human rights, and it rarely responds to outside criticism.

A company like Facebook, when it gets hammered hard enough by public opinion, will most likely put out a press release saying that XXX is a value we have always cared about, our Community Standards already cover this, we will expand training for the people who enforce the Community Standards to make sure these issues are handled better, blah blah blah. It's lousy, but at least it's a response. For Apple to respond at all, the outcry has to reach a very, very high level.

In fact, Apple may well violate human rights to a far greater degree than Google or Facebook, precisely because Apple is far more opaque: nobody knows, and nobody leaks, what is actually going on, so nobody cares and everyone assumes all is well. It really is a very successful PR strategy.

Even a simple look at how responsive Apple is on human rights issues shows it doing much worse than Google or Facebook: in the UN Business and Human Rights Centre's company response rate rankings, Apple sits at 31%, Facebook at 68%, and Google at 71%.

"Complying with local laws" is not a universal excuse

Many people can accept Apple helping the CCP carry out censorship in China and Hong Kong in the name of "complying with local laws": after all, business is business.

I want to point out that "complying with local laws" is not an iron rule of doing business; it comes in degrees, and it has the flexibility to change. For example, clearly very few companies want to comply with North Korea's local laws in order to do business in North Korea. Or, if the CCP decreed today that Apple must wiretap every iPhone in China to hunt down undesirables, that would obviously be difficult to execute in all sorts of ways, so Apple would not go along with it.

Since complying with local laws is something that can change, what reason is left for us, as consumers and citizens, to support these companies when they "comply with local laws" to violate human rights?

That said, I'm not saying people who buy Apple products are immoral; there are many considerations behind buying a product. I just don't understand why anyone would defend a corporation's human rights violations. To put it bluntly, the company isn't paying you; defending the company can be left to the PR department, whose staff are paid to do exactly that. If, at the very least, people can acknowledge that this company does things that harm society and that it should do better, that would already be a big improvement over the status quo.

Related

https://applecensorship.com/ lets you compare App Store search results across countries; you can see that certain apps are not visible in specific countries.

My Android 10 mods, tools, and settings

I recently upgraded my phone from LineageOS 15.1 to 17.1 painlessly (without wiping /data), so I decided to document the process, along with the tools I use on the new system and how to install them, partly so I never have to rediscover this workflow, and partly to share it and encourage others to use LineageOS.

Background – LineageOS 15.1

I have been running LineageOS 15.1 pretty much since I bought this Galaxy S7. LineageOS 16 apparently already existed at the time, but because Xposed support stopped at Android 8.1, I decided to hold off on upgrading to LineageOS 16.

I previously gave a COSCUP talk, "Tools and techniques for strengthening Android privacy," where I mentioned that XPrivacyLua is the one module I cannot do without. It requires Xposed, so for its sake I stayed on LineageOS 15.1.

Later, the developer of the UNOFFICIAL LineageOS build for my phone model shifted focus to LOS 16 and stopped updating their LOS 15.1 build, so to keep getting the Android Security Bulletin fixes I eventually had to build it myself. Fortunately that turned out not to be very hard.

Then I learned that capable Chinese developers had built EdXposed, which, together with Magisk and Riru-Core, provides the system-level functionality Xposed originally offered on Android 8 and above.

LineageOS 17.1 came out recently, and I finally found the time, so the plan was to first test the stability of LOS 17.1 + EdXposed + XPrivacyLua, and upgrade if it proved stable enough.

Build ROM

The developer of the UNOFFICIAL LOS build for the Galaxy S7 is quite dedicated and updates diligently, putting out a new LOS 17 build almost every month. But I figured that since I can build it myself anyway, why use someone else's binaries.

Building it yourself also lets you check the quality of the developer's code. I once discovered that the UNOFFICIAL LOS developer for another phone of mine was not very careful: looking at their repo, the commit messages were a mess, and they clearly didn't quite know how to use git revert. They were constantly reverting their own reverts, and some of the "reverts" were not real reverts at all but the code manually added back...

Not knowing git would be forgivable, but when I tried to download and compile their source, it wouldn't even build. I asked in the chat group, they pushed a few fix commits, and I finally got it to build. Then something strange happened: flashing their build booted fine, while my own build wouldn't boot (bootloop). After a long, fruitless debugging session I had no choice but to use their build; it felt like their code wasn't fully open-sourced. (Probably no malice, though; they likely just don't know how to use git.) (I later found out they were a French high-schooler. Being able to port an Android ROM without knowing git is actually pretty impressive.) ...But that's the story of the other phone; I digress.

The build process for LOS is almost identical across devices. The only extra step for an UNOFFICIAL device is to first fetch the local_manifests XML and put it at .repo/local_manifests/roomservice.xml, then continue with repo init and repo sync, and follow the rest of the official documentation.
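As a rough sketch of that flow (the manifest URL below is a placeholder for whatever your device's UNOFFICIAL maintainer publishes, and the branch name assumes a 17.1 build):

mkdir -p ~/android/lineage && cd ~/android/lineage
repo init -u https://github.com/LineageOS/android.git -b lineage-17.1
mkdir -p .repo/local_manifests
curl -o .repo/local_manifests/roomservice.xml https://example.com/roomservice.xml   # placeholder URL for the device's local manifest
repo sync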

One thing the official documentation doesn't mention is which OpenJDK version a LineageOS 17.1 build needs, and I couldn't find any information on it either, so I just guessed and tried OpenJDK 1.9 first. The build succeeded, confirming that 1.9 is indeed the version to use.

Another thing: I originally hoped LOS 15.1 and 17.1 could share a single source tree. Since it's all git, I figured I could simply check out a different branch into the working copy, but what I found online says this doesn't work, so I had to keep them separate (I can no longer find the source for that claim).

Also, after the first repo sync completes, updating the source tree later (with repo sync) may run into problems. In that case just force it to overwrite: repo sync --force-sync

After building, it's recommended to sign the build with your own keys; the official documentation explains how.

Installing the ROM and OpenGApps

Before installing, of course, remember to make a Nandroid backup first.

Installation is the usual routine: boot into recovery, hit install zip, flash the ROM, then flash OpenGApps.

I was a bit worried at first: I wasn't wiping /data, and I didn't know whether a direct upgrade would work (jumping two major versions at once, no less), or whether some apps would turn out to be incompatible.

After the upgrade, the first boot confirmed: no problem! All my existing apps kept running happily.

Installing Magisk, Riru-Core, and EdXposed

Just follow EdXposed's instructions and install Magisk, Riru-Core, and EdXposed in that order.

Testing XPrivacyLua

In my experience on LOS 15.1, quite a few apps refuse to open when restricted by XPrivacyLua. This doesn't seem to be a simple case of the app crashing when an API call returns fake data, because I have run into a mysterious situation several times: an app that had been opening fine under XPrivacyLua restrictions would suddenly stop opening (crashing immediately on launch), and after I lifted some XPrivacyLua restrictions it would open again, even though XPrivacyLua's restriction log never showed the app using the restriction I had lifted. Meanwhile logcat would contain native errors that looked ART-related (the kind that produce a tombstone). My guess is that Xposed's hooks into ART are simply not very stable.

So, with that experience in mind, after the upgrade I first had to test how stable my apps were under XPrivacyLua restrictions.

To my surprise, the tests showed that the LOS 17.1 + EdXposed + XPrivacyLua combination is actually more stable than on LOS 15.1!

Quite a few apps that crashed frequently on LOS 15.1 don't crash at all on 17.1.

What's more, on 15.1 I often had to turn off certain restrictions in XPrivacyLua for the sake of stability; on 17.1 I turned all of them back on and the apps still run stably!

In short, on LOS 17.1 you really can have XPrivacyLua restrict whatever APIs you want; turn everything on and nothing crashes.

However! One thing did get worse: although crashes are rarer, the impact of a crash seems to be bigger. On 15.1, when an app crashed because of an XPrivacyLua restriction, only that one app was affected (force closed); on 17.1, a crash caused by XPrivacyLua often takes down the whole system and forces a reboot. I'm not sure why, and logcat doesn't reveal anything useful.

Overall, though, I find it more stable than LOS 15.1, so I've decided to stay on 17.1.

LOS 17.1 settings tweaks

At first I noticed that the battery seemed to drain faster, but after installing BetterBatteryStats and measuring it, it turned out to be fine: with the screen on and in continuous use it drains roughly 4% every 10 minutes, and with the screen locked it consumes almost nothing, so with occasional use it easily lasts a whole day (12 hours).

Android 10 replaced the app switcher. Apps used to be stacked vertically for switching; now only one app is shown at a time and you have to swipe horizontally, which is genuinely hard to use. With the old switcher you just spotted the right title and tapped it; with the new one you have to swipe before the previous or next app even appears. From what I found online, there is currently no way to get the vertical switcher back; the only related work is for users of non-stock launchers who want the stock switcher, which Google doesn't expose the permission for, so someone made a module.

The GCam build I used on Android 8 also became unstable on 10. After trying several, I found Mod8.3b_IDan_v4.7_test2_fixlos17.apk to be the best. After installing it, you need to open the settings page and then restart the app for the settings to take effect before you can start taking photos.

WebSocket can be used to scan local ports, plus a rant about Web{*} junk

A post making the rounds on Hacker News today: someone discovered that eBay uses WebSocket to scan users' local ports: https://nullsweep.com/why-is-this-website-port-scanning-me/

Someone else then posted a technical look at scanning local ports with WebSocket, along with a rant about the Web{*} junk, things like WebVR/WebGL/WebAudio/WebBluetooth/WebUSB: https://medium.com/@bjt2n3904/in-protest-of-the-web-bulldozer-e0fce4567d26

I largely agree with the second article; I have complained about this before.

A few common arguments in favor of the Web* junk are:

One: if these things are not standardized, vendors will each adopt their own incompatible standards, reducing the portability of web technology. Reduced portability brings back all the problems of the IE-monopoly era, such as developers having to do a lot of extra work and therefore dropping support for less popular browsers.

But the deeper question here is what should be included in the W3C standards. The intuitive answer is to balance developers' needs against users' welfare. Developers generally want web technology to do as much as possible, so they can build more feature-rich web applications to attract users; users are usually the weaker party, without any particular vision for the Web, just waiting for developers to build things for them, using what they like and ignoring what they don't. But this usually ends with users having no choice but to use it (for example, switching to the web version and abandoning the native desktop application because the web version is "feature complete"), so the ecosystem is in practice still controlled quite one-sidedly.

And the organizations that in theory should balance the two sides and look out for users' welfare, such as the W3C and Mozilla, also seem to simply assume that more Web features means better for users. Strictly speaking that isn't wrong; it just isn't quite how I think about "users' welfare."

Roughly speaking, I think users should take a more active role in shaping the Web they want, much as in democratic politics, and that requires a vision as a starting point, though of course most users will not have one. On helping users articulate such a vision, Mozilla has actually done a lot of work.

Two: the Web should implement and enable as many features as possible by default, because that keeps strengthening the Web as a universal and ubiquitous distribution channel for applications and content (the word I want is "universal"), in contrast to closed, non-cross-platform distribution channels such as the Play Store and the iTunes Store.

This differs from my vision. I think we should instead educate users so they can decide for themselves which features to enable (and in doing so indirectly express "which applications I want to use via the Web" and "which applications I don't want to see," which is itself a vision for the Web). So most features (WebGL and the like) should be off by default, and enabled by the user only after being properly informed.

But then the Web becomes a much less attractive distribution channel for developers, because users will not necessarily buy the features (and the money-making opportunities attached to them) that developers want to push on them.

In short

Enabling the Web{*} standards by default in browsers does not, in my view, give sufficient weight to users' welfare. Capable users can disable some of these features using the methods from my previous post, but that does not solve the larger problem of users' welfare being ignored.

Migrate Android ROM build signing keys without losing /data (UNOFFICIAL to a different UNOFFICIAL key)

I ran into some problems in the process and dug in a bit to learn how the system works; these are my notes on what I learned.

Background

LineageOS documents the steps to take when migrating from a build signed with the unofficial test key to an official build, or vice versa. But what do I do if I'm migrating from UNOFFICIAL KEY A to a different UNOFFICIAL KEY B?

Looking at migration.sh, I realized that it just does some search-and-replace in /data/system/packages.xml. So the answer to my problem is simple: change the script so that it replaces KEY A with KEY B.
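A minimal sketch of that idea, assuming OLD_CERT and NEW_CERT hold the old and new certificate strings exactly as they appear in packages.xml (the variable names and the backup path are mine, not taken from the LineageOS script):

cp /data/system/packages.xml /data/system/packages.xml.bak      # back up PackageManager's database first
sed -i "s/${OLD_CERT}/${NEW_CERT}/g" /data/system/packages.xml  # swap every occurrence of the old cert for the new one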

But... it didn't work the first time, so I dug into the files and code to figure out why. It turned out I had simply pushed the wrong migration.sh to the device... but I still decided to write down what I learned.

/data/system/packages.xml

This is basically PackageManager's database. On boot, PackageManager checks the signatures of the installed APK files against this database; if they don't match, the app's data and permissions are not granted. (See also the source code at https://android.googlesource.com/platform/frameworks/base/+/ac9da77/services/core/java/com/android/server/pm/PackageManagerService.java )

sharedUser

Not only is each APK signed; shared system users also have associated certificates. See https://nelenkov.blogspot.com/2013/05/code-signing-in-androids-security-model.html

Decoding certificates

I wrote a script to decode the encoded certificate strings in packages.xml: https://gist.github.com/pellaeon/66a6acc2ea8752bba74103bbb81fdf65 . It takes the path to a packages.xml file as an argument and prints information about the certificates the file contains.
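As a rough illustration of what such decoding amounts to (not the script itself), a certificate value copied out of packages.xml can be inspected with standard tools, assuming, as appears to be the case in my file, that the string is a hex-encoded DER certificate; CERT_HEX is a placeholder for the copied string:

echo "$CERT_HEX" | xxd -r -p > cert.der                              # hex string -> DER bytes
openssl x509 -inform DER -in cert.der -noout -subject -fingerprint   # show who the certificate belongs to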

Other useful information

Updating Linux iwlwifi firmware

The Intel Wireless 8265 / 8275 Wi-Fi adapter on my Dell Latitude 7490 was disconnecting every few minutes, and after a few reconnections the interface simply disappeared from ifconfig. This was too annoying, so I decided to track down the issue.

This is a note from someone who doesn't know much about the kernel, debugging the issue.

Firmware bug

Searching for the kernel log messages quickly led me to this kernel bug: https://bugzilla.kernel.org/show_bug.cgi?id=203775 . The bug was resolved in June 2019, so the next step was to check whether my distribution includes the fix. According to this Stack Exchange answer, dmesg should show which firmware (ucode) file was loaded and the firmware version, but for me it only showed:

kernel: [525967.408047] iwlwifi 0000:02:00.0: Loaded firmware version: 36.9f0a2d68.0

Luckily, in a duplicate bug report the developer said that 36.9f0a2d68.0 is not the latest version.

So I had to update the firmware, but how?

Updating iwlwifi firmware — look for loaded ucode file

I wasn't able to figure out from the above which ucode file was loaded, but I found another bug report by the same person: https://bugzilla.kernel.org/show_bug.cgi?id=203055 . It seems that my adapter uses either iwlwifi-8000C-36.ucode or iwlwifi-8265-36.ucode. That's narrow enough; I should be able to just update these two files.

Updating iwlwifi firmware — actually updating

On Ubuntu the two ucode files are shipped in the linux-firmware package. After some googling I couldn't find any canonical way to update it. Non-canonical ways would be to either download, build, and install linux-firmware from the upstream master branch, or to install the package from a newer Ubuntu release. Both of these methods update unrelated firmware along the way, which seemed risky (the updated firmware might not work with my non-mainline Ubuntu kernel version).

More googling suggested that I should be able to manually download just the ucode files, copy them to /lib/firmware, and they would be loaded.
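A sketch of what that looks like, assuming the newer file is fetched straight from the upstream linux-firmware repository (the exact filename depends on which ucode your adapter loads):

cd /lib/firmware
sudo cp iwlwifi-8265-36.ucode iwlwifi-8265-36.ucode.bak   # keep the distro-provided file around
sudo wget -O iwlwifi-8265-36.ucode \
  https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/iwlwifi-8265-36.ucode
sudo reboot                                               # the driver picks up the new file on the next boot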

And it turns out that works. After a reboot I can see the new firmware is loaded:

[   38.969834] iwlwifi 0000:02:00.0: loaded firmware version 36.77d01142.0 op_mode iwlmvm

Side note – TDLS

This bug is triggered by a TDLS connection. According to Wikipedia, TDLS is "a seamless way to stream media and other data faster between devices already on the same Wi-Fi network." Devices using it communicate directly with one another, without involving the wireless network's router.

So I searched for "TDLS" in syslog:

Apr  8 00:47:25 X wpa_supplicant[1262]: TDLS: Creating peer entry for xx:xx:xx:xx:xx:xx                                                                                   
Apr  8 00:47:25 X wpa_supplicant[1262]: TDLS: Dialog Token in TPK M1 1                                                                                                    
Apr  8 00:47:25 X kernel: [525967.407778] iwlwifi 0000:02:00.0: Microcode SW error detected.  Restarting 0x82000000.                                                      
Apr  8 00:47:25 X kernel: [525967.408032] iwlwifi 0000:02:00.0: Start IWL Error Log Dump:                                                                                 

Apparently, upon receiving a TDLS connection setup request, iwlwifi reset itself. So TDLS is definitely the trigger here.

Looking up the MAC address, it belongs to my other Samsung Android device. It was probably setting up TDLS because I had configured that device to use the Burp proxy running on my computer.

TDLS seems quite interesting, and you can set it up manually: https://superuser.com/questions/1335457/how-can-i-use-tdls-802-11z-for-ssh-connections-on-my-lan

What my broken Synology NAS taught me about mdadm, LVM, and testdisk

The fail

I was trying to set up SSL client certificate authentication on nginx. In the config, nginx needs to read the client certificate CA from a custom file location. I then found out that nginx on Synology DSM wouldn't start because of AppArmor. I tried to modify the AppArmor profile usr.bin.nginx but could only find a cache of the profile under /etc/apparmor.d/cache, so I had to disable AppArmor for nginx entirely by removing the cache file.
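A sketch of that workaround, assuming (as on my box) that only the compiled profile cache exists and that the cached file is named after the profile:

ls /etc/apparmor.d/cache/                                            # the compiled nginx profile lives here
sudo cp /etc/apparmor.d/cache/usr.bin.nginx /root/usr.bin.nginx.bak  # keep a copy so it can be restored later
sudo rm /etc/apparmor.d/cache/usr.bin.nginx                          # with the cache gone, nginx runs unconfined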

Then came some weeks of trial and error trying to get it working on nginx. In the end I couldn't get it to work. (I forgot why.)

The problem came when I tried to re-enable AppArmor for nginx by moving the profile cache back and reloading AppArmor:

sudo synoservice --restart apparmor

I lost communication with the NAS after that command. It no longer responded to HTTP, SSH, or anything else. I unplugged it from power and plugged it back in; it couldn't boot. The blinking blue light of death.

Disk structure

https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-86731

Operations

Below are some of the operations I performed, not in order; I only want to document how to accomplish each individual task.

Mounting the data partition and system partition: https://xpenology.com/forum/topic/7004-tutorial-how-to-access-dsms-data-system-partitions/

On first mount the system partition is read-only; I used the following command to make it read-write:

sudo mdadm --readwrite /dev/md2

Since the data partition lives on an LVM LV inside an MD array, here is the correct procedure for removing the disk:

  1. umount the filesystem
  2. Deactivate the volume group: sudo vgchange -an vg1000 . If you have more than one VG named vg1000, vgchange will ask you to specify the UUID, like this: vgchange -an vg1000 --select vg_uuid=22aBQe-iiUm-XJEd-cjvF-S9qV-U26U-WpsG3p
  3. Use sudo dmsetup table to check that the VG is actually deactivated. (If it is, it won't show up.)
  4. Stop DSM system and data MD arrays: sudo mdadm --stop /dev/md126 , sudo mdadm --stop /dev/md127
  5. Tell the kernel to remove the device: echo 1 > /sys/block/sde/device/delete (issue this as root) (reference)
  6. You should hear the disk power down; wait for it to spin down completely, then you can safely unplug it.

View MD status:

cat /proc/mdstat

Repairing DSM (failed)

First, after mounting the system partition, I took a backup using tar.
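Something along these lines, assuming the system partition is mounted at /mnt/dsm-system and the archive goes to another disk mounted at /mnt/backup (both paths are examples):

sudo tar -czpf /mnt/backup/dsm-system-partition.tar.gz -C /mnt/dsm-system .   # -p preserves permissions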

Then I slid another disk into the NAS and had it install a fresh operating system on that disk. I then copied the contents of the fresh DSM system partition into the old system partition. This didn't work; it still wouldn't boot.

I also tried the official HDD migration method. Didn’t work.

Then I tried powering on the NAS without a disk and inserting the original disk afterwards, hoping it would be detected as a "migrated" data disk. The NAS asked whether I wanted to install DSM on the disk. I thought, well, let's see if it can install the system into the existing system partition; usually there is a warning before any formatting happens. This was a mistake: I clicked install and it immediately started formatting the disk without warning. I pulled the power right away, hoping to save the data from being overwritten.

Testdisk

Now I had a disk with a broken partition table and needed to recover files from it.

First I asked testdisk to work directly on the whole disk device: testdisk /dev/sdb . It did find the MD and LV headers, but it did not find the ext4 partition inside the LV. I let testdisk run its "deep search" for a long while to see whether it would find the ext4 superblock. It didn't. (I imagined the ext4 superblock would be near the LV header, but apparently it is not.)

Fortunately, I soon found that the LV was still detected by my system (I'm not sure whether it was the original LV or the newly created one). So I pointed testdisk at the LV instead: testdisk /dev/vg1000/lv . This time it quickly found the ext4 filesystem and the directory structure, and the directory structure and contents were still intact.

With that, I could start dumping the files using testdisk.

Finally

Don't buy a product when you don't know how it works (and when its documentation of how it works is bad).

If you ever buy such a product, don't use it for "unsupported" use cases or try to customize it too much (though sometimes how much is too much can be hard to tell).

If I ever need a NAS again: buy a Node 304 chassis, put an ASRock J5005-ITX motherboard in it, and install FreeNAS.

LXC Simple Bridge Networking with existing iptables rules

Environment: RPi 3B+, Raspbian 9

I set up LXC with simple bridge networking according to the instructions here: https://ubuntu.com/blog/converting-eth0-to-br0-and-getting-all-your-lxc-or-lxd-onto-your-lan .

The same setup, along with other ways of doing it, is also documented here: https://wiki.debian.org/LXC/SimpleBridge#Host_device_as_bridge
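For reference, a rough sketch of what that host-side bridge amounts to, done by hand with iproute2 instead of the persistent config from the guides (interface names are common defaults; adjust for your setup, and note this knocks eth0 off the network until br0 gets an address):

sudo ip link add name br0 type bridge   # create the bridge
sudo ip link set eth0 master br0        # enslave the physical NIC to it
sudo ip link set br0 up
sudo dhclient br0                       # the host now takes its address on br0; point the container's network link at br0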

The problem I had was that, after setting everything up, the container still couldn't reach the outside world. I found this Q&A that taught me a nice way to test whether firewall rules are what's interfering:

echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables

This will make bridge networking bypass iptables.

After testing, simply change it back to 1.

The second answer there then describes exactly my problem.

So I installed iptables-persistent. Initially I couldn't figure out the format of /etc/iptables/rules.v4, so I ran:

netfilter-persistent save

This saves the current rules into that file.

The next answer described exactly my issue, so I integrated their rules into mine and it worked:

*filter
-A FORWARD -o lxcbr0 -m comment --comment "allow packets to pass from lxd lan bridge" -j ACCEPT
-A FORWARD -i lxcbr0 -m comment --comment "allow input packets to pass to lxd lan bridge" -j ACCEPT
COMMIT

Different methods to intercept Android app SSL traffic (public version)

This is a record of my process trying to intercept SSL traffic on Android apps.

Obtaining APK files

Most tutorials instruct you to download the APK file from a third-party site like APKPure or APKMirror, or to install the app on your device first and then pull it to the computer using adb pull. Here I use another way: downloading manually from Aurora Store. Aurora Store is an unofficial, FOSS client for Google's Play Store. This has some advantages:

  • Third-party APK sites might not have the latest version of the APK.
  • Third-party APK sites might not have the APK variant (CPU architecture, screen resolution, locale, etc.) that fits your device.
  • Aurora Store allows you to easily spoof device models and regions.

Steps:

  1. If you want to download apps only available in a specific region, connect to a VPN of that region.
  2. Select "Anonymous" when using Aurora Store.
  3. Search for the app. The search results will contain apps available in that region.
  4. Go to the app info page.
  5. In the upper-right 3-dot menu, select "Manual Download".
  6. The text field will be pre-filled with the latest version code. If you want to download an older version you can change the code here. Note: a) old versions are not always available, b) every app has its own scheme for naming the code, c) the code corresponds to the android:versionCode property in the AndroidManifest.
  7. Click "Download". This downloads all the split APKs (or Dynamic Delivery modules) that fit your device profile into Internal Storage/Aurora/.
  8. You don’t have to install the app, because we’re going to modify it.

Using rootless Xposed

The unmodified version of the app failed to run (black screen) on VirtualXposed.

TaiChi even failed to install the app: TaiChi asks to uninstall the app outside its own container, and after the uninstallation nothing happens.

Modifying the APK

Modifying APKs usually involves these steps:

  • Decompile: apktool does a good job
  • Modify:
    • Source code: edit the smali; I don't need that here
    • Resources: AndroidManifest.xml and other XML files; it turns out this part has quite a few difficulties
  • Rebuild the package: apktool usually does a good job, but there seem to be some measures an app developer can take that make apktool fail to rebuild
  • Signing: no problem

network_security_config

Since Android 7, user-supplied CAs are no longer trusted by apps. Most guides (like this one) suggest modifying networkSecurityConfig in the AndroidManifest to make the app trust user-supplied CAs.
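For context, a sketch of the standard approach those guides describe, assuming a single APK named app.apk (the file names and the referenced resource name are the conventional ones, not taken from any specific app):

apktool d app.apk -o app_src/
# 1. in app_src/AndroidManifest.xml, add android:networkSecurityConfig="@xml/network_security_config" to the <application> tag
# 2. create app_src/res/xml/network_security_config.xml with a <certificates src="user" /> trust anchor
apktool b app_src/ -o app_modified.apk   # this rebuild step is where things went wrong for me on some apps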

Unfortunately, when I did this for some apps, apktool failed to rebuild them. There were many strange errors; some could be solved by telling apktool to use aapt2, but I ran into many cases where even aapt2 didn't help. So I had to look for other ways.

The AndroidManifest has a complex binary format. It is usually quite easy to decode a binary AndroidManifest into readable XML text; in the process, the decoder (apktool) flattens special "binary pointers to external resource files" into text paths and converts other special structures into text. But when encoding the text AndroidManifest back into binary, the external references become ambiguous and the encoding can fail. (I don't fully understand this part.) Fortunately, for most apps, there is another way: mark the app debuggable.

debuggable

For many apps, the networkSecurityConfig is defined like this:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config cleartextTrafficPermitted="true">
        <trust-anchors>
            <certificates overridePins="true" src="system" />
        </trust-anchors>
    </base-config>
    <debug-overrides>
        <trust-anchors>
            <certificates src="user" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>

This means that if the app is debuggable, it will accept user-supplied CAs.

Modifying the debuggable property in AndroidManifest.xml is a much smaller change than modifying the networkSecurityConfig path; in the worst case I could just use a hex editor to flip a few bits. And since the change is minimal, apktool should be able to rebuild the app.

In order to modify the debuggable property, I tried many tools:

But in the end I had success with Ele7enxxh‘s AmBinaryEditor. (Documentation: http://ele7enxxh.com/AndroidManifest-Binary-Editor.html )

I also wrote some shell scripts to help handle decoding, building and signing split APKs: https://github.com/pellaeon/AddSecurityExceptionAndroid

Full process

Assumption: you already have all split APKs stored in a directory apk1/

1. Decode all APKs:

~/projects/AddSecurityExceptionAndroid/splitApktool.sh decode apk1/
# Decoded apks are put under apk1_tmp/

2. Binary edit AndroidManifest.xml using AmBinaryEditor:

cd apk1_tmp/xxxx.apk_unpack/
~/projects/AmBinaryEditor/bin/Release/ameditor attr --modify application -d 1 -n debuggable -t 18 -v true -i AndroidManifest.xml -o AndroidManifest.xml1
mv AndroidManifest.xml1 AndroidManifest.xml

3. Build the APKs

~/projects/AddSecurityExceptionAndroid/splitApktool.sh build apk1/ # It will build from apk1_tmp/

4. Install the split APKs onto the device

~/projects/AddSecurityExceptionAndroid/adbinstallsplitapk.sh apk1_new/

5. Check if the package is installed as debuggable:

$ adb shell
j3y17lte:/ $ for p in $(pm list packages | cut -d : -f 2); do (run-as $p id >/dev/null 2>&1 && echo $p); done
com.xxx # If it shows the package id, you have success

Notes

# Uninstall an APK using pm. Sometimes a package is not removed completely when you uninstall via the GUI, causing the new installation to fail.

adb shell pm uninstall <com.xxx.packageid>

Forking AmBinaryEditor

During testing I fixed a few quirks of AmBinaryEditor; they are documented in the readme. https://github.com/pellaeon/AmBinaryEditor

For apps that don't have the debuggable property already defined

In the scenario above, the debuggable property already exists in AndroidManifest.xml. If it does not, we need to add the attribute using AmBinaryEditor.

# WON'T WORK: Use this command to add a debuggable attribute to the application tag
~/projects/AmBinaryEditor/bin/Release/ameditor attr --add application -d 1 -n debuggable -r 16842767 -t 18 -v true -i AndroidManifest.xml -o AndroidManifest.xml1

Note: when adding attributes, we need to specify the resource ID using -r and a decimal number. Refer to the Android source code for the system's global resource IDs. The resource ID for debuggable is 0x0101000f, which is 16842767 in decimal.
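A throwaway one-liner to double-check the conversion (not part of the tooling):

printf '%d\n' 0x0101000f    # prints 16842767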

Unfortunately, a debuggable attribute added this way was not accepted. (The APK still installed fine, but the app was not debuggable.)

To figure out why, I inspected the APK using aapt:

$ aapt list -v -a apk2.apk
[...SNIP]
    E: application (line=106)
      A: android:theme(0x01010000)=@0x7f12001e
      A: android:label(0x01010001)=@0x7f110bbe
      A: android:icon(0x01010002)=@0x7f0e0000
      A: android:name(0x01010003)="[REDACTED]"
      A: android:persistent(0x0101000d)=(type 0x12)0x0
      A: android:launchMode(0x0101001d)=(type 0x10)0x3
      A: android:alwaysRetainTaskState(0x01010203)=(type 0x12)0xffffffff
      A: android:allowBackup(0x01010280)=(type 0x12)0x0
      A: android:largeHeap(0x0101035a)=(type 0x12)0xffffffff
      A: android:supportsRtl(0x010103af)=(type 0x12)0xffffffff
      A: android:resizeableActivity(0x010104f6)=(type 0x12)0x0
      A: android:networkSecurityConfig(0x01010527)=@0x7f150003
      A: android:roundIcon(0x0101052c)=@0x7f0e0000
      A: android:appComponentFactory(0x0101057a)="android.support.v4.app.CoreComponentFactory" (Raw: "android.support.v4.app.CoreComponentFactory")
      A: android:isSplitRequired(0x01010591)=(type 0x12)0xffffffff
      A: android:debuggable(0x0101000f)=(type 0x12)0x1

Compare this with the aapt output for another APK where an existing debuggable attribute was changed with ameditor attr --modify:

$ aapt list -v -a apk1.apk | less
[...SNIP]
    E: application (line=111)
      A: android:theme(0x01010000)=@0x7f12002e
      A: android:label(0x01010001)=@0x7f110c19
      A: android:icon(0x01010002)=@0x7f0e0000
      A: android:name(0x01010003)="[REDACTED]"
      A: android:persistent(0x0101000d)=(type 0x12)0x0
      A: android:debuggable(0x0101000f)=(type 0x12)0x1
      A: android:launchMode(0x0101001d)=(type 0x10)0x3
      A: android:alwaysRetainTaskState(0x01010203)=(type 0x12)0xffffffff
      A: android:allowBackup(0x01010280)=(type 0x12)0x0
      A: android:largeHeap(0x0101035a)=(type 0x12)0xffffffff
      A: android:supportsRtl(0x010103af)=(type 0x12)0xffffffff
      A: android:resizeableActivity(0x010104f6)=(type 0x12)0x0
      A: android:networkSecurityConfig(0x01010527)=@0x7f150003
      A: android:roundIcon(0x0101052c)=@0x7f0e0000
      A: android:appComponentFactory(0x0101057a)="android.support.v4.app.CoreComponentFactory" (Raw: "android.support.v4.app.CoreComponentFactory")
      A: android:isSplitRequired(0x01010591)=(type 0x12)0xffffffff

One difference I spotted is that in the former output the debuggable attribute is positioned last, while in the latter it sits in the middle. In the latter, the attributes are sorted by their resource ID (for debuggable the resource ID is 0x0101000f; see the Android source code for all resource IDs).

Next, looking at AmBinaryEditor's source code, in the function AddAttribute:

        while(1)
        {
            if (list->next == NULL)
            {
                break;
            }
            list = list->next;
        }
        list->next = attr;
        attr->prev = list;

It appears that attributes are stored in a linked list, and a new attribute is appended to the end of the list. This matches what we observed in the aapt output.

So, to make it work, I needed to insert the debuggable attribute at the correct position. I quickly modified the AmBinaryEditor source code with a hard-coded position index of 3:

        for ( int i=0; i<=3; i++ )
        {
            if (list->next == NULL)
            {
                break;
            }
            list = list->next;
        }
        ATTRIBUTE *attr_orignext = list->next;
        list->next = attr;
        attr->prev = list;
        attr->next = attr_orignext;

Then I tried inserting the attribute and building the APK again:

$ ~/projects/AmBinaryEditor/bin/Release/ameditor attr --add application -d 1 -n 'debuggable' -r 16842767 -t 18 -v true -i AndroidManifest.xml -o AndroidManifest.xml1
$ mv AndroidManifest.xml1 AndroidManifest.xml
$ cd -
$ ~/projects/AddSecurityExceptionAndroid/splitApktool.sh build apk2/

This time the attribute was correctly inserted in the middle and successfully parsed upon installation. Success!

References

Root

With root, everything is possible, but I had hoped not to go down that path. Well, I have to admit that intercepting traffic in an unrooted environment turned out to be time spent digging a hole that didn't matter. Eventually I rooted the phone to intercept the traffic; that was, after all, my real goal.

Magisk only

Xposed doesn't work beyond Android 8.1 yet, so if you need to get this working, use Magisk; it works on most versions of Android.

This Magisk module will copy the user CA store into system CA store: https://github.com/NVISO-BE/MagiskTrustUserCerts

As of Magisk 20.3, the repo above has no effect, seemingly because it uses an older Magisk module template that is no longer supported. A pull request has been opened to fix it but has not yet been accepted by the original author. In the meantime, download the module from https://github.com/giacomoferretti/MagiskTrustUserCerts to get a working version. (Follow the Installation section to generate an installable Magisk zip file.)

Note: How did I discover that the module was not working?
I looked in /data/adb/modules/trustusercerts and found that post-fs-data.sh, which does exist in the repo, was missing. If the module had installed properly, that file would have been present in the directory.
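A quick way to run that check from a computer, assuming a Magisk-rooted device with ADB access (the module directory name is the one I saw on my device):

adb shell "su -c 'ls /data/adb/modules/trustusercerts/'"   # post-fs-data.sh should be listed if the module installed correctly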

In the end, after I finally got it to work, I found that the particular app that I was looking at seemingly employs some custom SSL pinning, so the Magisk module only allowed me to intercept some of the HTTPS messages. So I had to move on to the next approach.

Objection and Frida

  • Running android sslpinning disable from the Objection shell: didn't work.
  • Tried some popular Frida scripts: they didn't work either.

EdXposed

In the end, I got everything working with EdXposed and this module: https://github.com/Fuzion24/JustTrustMe

I was able to intercept all traffic from the app.

References

General walk-through articles

Other tools

Deobfuscation

KDE Plasma VNC remote session

  • A remote VNC connection gets a new session, separate from the local console
  • The session remains after disconnecting by closing the window
  • The server is terminated upon logging out from the start menu
  • SSH forwarding is needed for security
  • The screen resolution cannot be changed
  • The remote user logs in directly with the VNC password; no user password is needed
  • The VNC password needs to be generated with `vncpasswd` the first time

Reference: https://wiki.archlinux.org/index.php/TigerVNC#System_mode
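A rough sketch of the flow under that setup, assuming the server was configured per the linked TigerVNC "System mode" instructions and serves display :1 (host name and display number are examples):

vncpasswd                                     # run once on the server to set the VNC password
ssh -L 5901:localhost:5901 user@my-server     # tunnel the VNC port over SSH
vncviewer localhost:5901                      # connect through the tunnel; log in with the VNC password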