Compiling an Android Linux Kernel for Xperia Phones

Filed under Android, Hacking, Hardware, Linux, Xperia T

The Linux kernel for Sony Xperia phones is open source. So nobody is stopping you from compiling your own custom kernel. All you need is a Linux machine, the kernel sources and a couple of tools.


I will explain the steps needed to build, package and flash a custom kernel by using the Xperia T (LT30p) as an example. The steps should be pretty similar for all other modern Xperia phones.

Step 0: Preparation

A Linux machine

I used LMDE 64bit, which is basically Debian. But anything will do. If your machine is powerful enough to compile a regular kernel then it’s certainly powerful enough to compile the Android kernel.

The Kernel Source

Get the source from here: Sony Developer World – Open Source Downloads. Look out for the version that matches the version of your phone’s firmware. For instance I have a Xperia T (LT30p) with firmware 9.1.A.1.145. So the matching kernel is this one: Open source archive for build 9.1.A.1.145


You need a couple of tools which will help you build the kernel and package it into a flashable form.

  • A cross compiler toolchain for ARM processors
  • Flashtool: A superb firmware tool for Xperia phones
  • Android Platform tools: Fastboot
  • A stock kernel.sin or kernel.elf of the same firmware

I will mention where to get those tools in each respective section.

Step 1: Compiling the Kernel

In this tutorial, I’ll put all the necessary files in a working directory named /android. If you are using a different working directory, replace /android wherever necessary.

Before we start, we need to make sure that a couple of required packages are installed (this is for Debian – use your favorite package manager if you are using a different Linux distro):

sudo apt-get install git libncurses5-dev lzop

Now download the kernel source. As I mentioned above, search for the kernel that matches your phone’s firmware. I used the one for the Xperia T.


Extract the kernel source:

mkdir source
cd source
tar xjf ../9.1.A.1.145.tar.bz2 

Since we are compiling a kernel for a different hardware architecture than the machine we are compiling on, we need a cross compiler which will generate a kernel binary that is compatible with our phone’s processor. The Xperia T uses an ARM processor, therefore we need a cross compiler toolchain for ARM. There is a git repository that contains a fully prebuilt toolchain. We only need to download it:

git clone
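
Once the clone has finished, it doesn’t hurt to check that the toolchain actually runs. This is a sketch; the path below matches the CROSS_COMPILE prefix used later in this tutorial, so adjust it if yours differs:

```shell
# Optional sanity check that the cross compiler is in place and executable:
CC=/android/arm-eabi-4.6/bin/arm-eabi-gcc
if [ -x "$CC" ]; then
    "$CC" --version
else
    echo "cross compiler not found at $CC" >&2
fi
```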

Now we need to set two environment variables in order for the build process to know for which architecture to build and which cross compiler to use:

export ARCH=arm
export CROSS_COMPILE=/android/arm-eabi-4.6/bin/arm-eabi-

We are ready to build the kernel. The first step is to pick the correct config:

cd /android/source/kernel/
make blue_mint_defconfig

This has fetched the correct config and put it in the file .config. In my example, blue_mint stands for the Xperia T. There is a file called README_Xperia which contains the configuration names for the different phones the kernel source can be used for. If you have downloaded the correct source, you should be able to find the configuration name for your phone in README_Xperia. If your phone is not listed, you probably have the wrong kernel source. In my case the kernel source is only suitable for the Xperia T (blue_mint_defconfig) and the Xperia V (blue_tsubasa_defconfig).

The next two steps are exactly the same as compiling a kernel for a “normal” Linux machine. First we check the configuration:

make menuconfig

This will display a menu driven configuration screen. Here you can add and remove certain kernel features like file systems or device drivers. For example if you want to be able to mount NFS drives over the network, you need to enable NFS support in File Systems -> Network File Systems -> NFS client support.
Exit and save your configuration if you made changes.
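
If you want to double-check that your change was saved, the resulting option can be grepped out of .config directly (CONFIG_NFS_FS is the symbol behind “NFS client support”):

```shell
# Verify the NFS client option actually landed in the generated .config:
grep '^CONFIG_NFS_FS' .config || echo "NFS client support is not enabled"
```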

Now all that’s left is to compile the kernel image:

make

This will take a while. Once it is finished, there will be a kernel image in arch/arm/boot/zImage.

Step 2: Building the kernel.elf

The kernel image needs to be combined with an initial ramdisk and a couple other files to form a kernel.elf file. The kernel.elf file can finally be flashed to the phone via a USB cable.

The initial RAM disk (initrd) is a temporary root file system that is mounted during system boot. It contains various executables and drivers that permit the real root file system to be mounted.

In order to acquire the ramdisk we need to extract it from an existing kernel.elf. You can extract the firmware from your phone or download the firmware from the Internet. The xda developers forum is an excellent place to start looking for firmware files for your phone. In my case I downloaded the stock firmware in FTF form from here and extracted the kernel.elf from it. In many cases you can also find the kernel.elf itself; most of the time it’s called boot.img, however.

The FTF file is basically just a zip archive. Unzip it. We only need a file called kernel.sin from that archive. In order to further extract files from kernel.sin we use Flashtool, which can be downloaded from Flashtool’s website. Extract and start Flashtool:

7z e flashtool-
tar xf flashtool-
cd FlashTool
sudo ./FlashTool

Flashtool can extract kernel.sin for us now. In the menu click on Tools->Sin Editor. Then select the sin file and extract the data:

This will give you the kernel.elf file. The kernel.elf itself needs to be extracted as well. Click on Tools->Extractors->Elf and select the elf file:


You should end up with the following files:

-rw-r--r-- 1 norbert norbert 20971520 Oct 12 00:55 kernel.elf
-rw-r--r-- 1 norbert norbert      130 Oct 12 00:57 kernel.elf.bootcmd
-rw-r--r-- 1 norbert norbert     1072 Oct 12 00:57 kernel.elf.cert
-rw-r--r-- 1 norbert norbert  5655400 Oct 12 00:57 kernel.elf.Image
-rw-r--r-- 1 norbert norbert  1512035 Oct 12 00:57 kernel.elf.ramdisk.gz
-rw-r--r-- 1 norbert norbert   133372 Oct 12 00:57 kernel.elf.rpm.bin
-rw-r--r-- 1 norbert norbert       16 Oct 12 00:55 kernel.partinfo
-rw-r--r-- 1 norbert norbert  7306651 Jul 25 11:54 kernel.sin
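
As a side note, the extracted ramdisk is just a gzipped cpio archive, so you can peek inside it to see what the phone’s initial root file system contains. This is a harmless, read-only check:

```shell
# List the first few entries of the extracted ramdisk, if present:
if [ -f kernel.elf.ramdisk.gz ]; then
    gzip -dc kernel.elf.ramdisk.gz | cpio -t 2>/dev/null | head
else
    echo "kernel.elf.ramdisk.gz not found in the current directory"
fi
```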

Now we need a little script which lets us reassemble all the parts into our own kernel.elf. It can be downloaded from Sony’s Developer Site:


It contains only a single Python script. Put that script into the same directory where you extracted the original kernel.elf.

Now also copy our compiled custom kernel into the same directory:

cp /android/source/kernel/arch/arm/boot/zImage .

All that’s left is to combine all the parts into our new kernel.elf. I will call it mykernel.elf here. The command line differs between phone models. Use Google to find the appropriate arguments. Again, this example is for the Xperia T:

python -o mykernel.elf zImage@0x80208000 kernel.elf.ramdisk.gz@0x81400000,ramdisk kernel.elf.rpm.bin@0x20000,rpm kernel.elf.bootcmd@0x00000000,cmdline

That’s it. mykernel.elf is now ready to be flashed to the phone.

Step 3: Flashing the new Kernel

We need fastboot from the android platform tools, which can flash the kernel to the phone:

sudo apt-get install android-tools-adb android-tools-fastboot

If you can’t find a package for your distro, you can also get the platform tools by downloading the Android SDK.

Now turn off your phone. Once the phone has shut down, hold the Vol+ button while plugging in the USB cable. The phone should now be in fastboot mode, which is indicated by a blue lit LED. Note that the button to get into fastboot mode is possibly different on other Sony phones. Take a look at this overview.

Check if the connection is ok:

norbert@lmde:/android/mykernel$ sudo fastboot getvar version
version: 0.5
finished. total time: 0.006s

Finally we can flash our kernel:

norbert@lmde:/android/mykernel$ sudo fastboot flash boot mykernel.elf 
sending 'boot' (7129 KB)...
(bootloader) USB download speed was 5427kB/s
OKAY [  1.368s]
writing 'boot'...
(bootloader) Flash of partition 'boot' requested
(bootloader) S1 partID 0x00000003, block 0x00003000-0x0000cfff
(bootloader) Erase operation complete, 0 bad blocks encountered
(bootloader) Flashing...
(bootloader) Flash operation complete
OKAY [  0.434s]
finished. total time: 1.802s

We can also reboot the phone via fastboot:

sudo fastboot reboot

If all went well, you should see the Sony logo and the phone should boot as usual. Congratulations, your phone is now the owner of a brand new kernel.
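
If you want to confirm that the phone is really running your build, adb (installed above alongside fastboot) can query the kernel version string once the phone has booted with USB debugging enabled. This is a sketch; the exact output depends on your build:

```shell
# Show the kernel the connected phone is actually running:
if command -v adb >/dev/null 2>&1; then
    adb shell uname -a || echo "no device attached"
else
    echo "adb not installed on this machine" >&2
fi
```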


In Case of Disaster

Of course, flashing a home-made kernel is nothing for the faint of heart. If something goes wrong, and believe me, something will go wrong sooner or later, your phone will not greet you with the familiar Sony logo again. Instead it will sulk with a black screen. You bricked it.

But don’t panic. Sony phones are pretty sturdy and it is almost impossible to really brick them. Usually you only need to soft-reset them to get into fastboot mode again.

Unplug and re-plug the USB cable. Hold Vol+ and the ON button at the same time for a couple of seconds until you feel a single vibration. Let go of the ON button but keep Vol+ pressed. You should be in fastboot mode again, indicated by the blue LED. Now you can reflash the original kernel.elf with fastboot in order to get the phone working again.

If you can’t get your phone into fastboot mode, check out the following xda-developers post, which contains everything you need to know to unbrick your phone: Recover ALL 2012 XPERIAs from SOFT-brick.

How to Troubleshoot CIFS Problems on Android and Linux in General

Filed under Android, Hacking, Linux, Xperia T

If you are using Linux kernel 3.4, especially on Android, and you are having trouble mounting Windows or Samba shares, and you are in a hurry, please skip to section 2 or the conclusion at the end. I just need to tell a little story first.

1. The Story of a Fool

That’s how it’s supposed to be: You take out your Android phone, mount a Windows share to a directory of your liking and start accessing your files from any app you like. Yes, eat that, iPhone. Welcome to the 21st century. At least that’s how it used to be on my Xperia T until I updated the firmware to a newer version.

To be more precise, you need a rooted Android phone. And an app like CifsManager. It’s pretty easy. You can enter a number of windows shares, including user name and password if necessary, and the paths you want them to be mounted to. From now on, you can mount and unmount the shares with a single click. Really neat.

Then one day, I updated the firmware of my Xperia T from 7.0.A.3.195 (Android 4.0.4) to 9.1.A.0.489 (Android 4.1). And the days of happy networking with Windows were gone. CifsManager stubbornly refused to mount with an “Invalid argument” error. A very helpful error, I must say.

After hours of googling around, I found many articles telling me that the Linux kernel of the new firmware is probably missing CIFS support and that, if I am lucky, somebody will create a CIFS kernel module, which could be loaded with insmod. Again many hours later, I realized that nobody seems to have done that for my phone. So I gave up, hoping that the issue would be fixed in another firmware.

Half a year and two firmware versions later (9.1.A.1.140 and 9.1.A.1.145), the issue was still there. I knew that the kernel for Xperia phones is open source (Sony, you roxor!!!). Having compiled Linux kernels a couple of times, I thought it shouldn’t be that hard to compile one for my phone in order to get a CIFS kernel module. So many hours later, with the help of many posts on xda developers, I finally created my own Android Linux kernel, getting cifs.ko as a byproduct. Actually, I could have compiled the CIFS module into the kernel and used the whole kernel. But I didn’t want to mess around too much and decided to compile CIFS as a kernel module and just take the module by itself.

I copied cifs.ko to my phone, and gave it a try:

root@android:/system/lib/modules # insmod cifs.ko
insmod: init_module 'cifs.ko' failed (Invalid argument)

Of course, another “Invalid argument” error. Would have been too easy. So after asking Dr. Google again, I found out that kernel errors can be viewed with dmesg. So hopefully there is a kernel message that will tell me what’s wrong:

255|root@android:/system/lib/modules # dmesg
<3>[ 5268.808545] cifs: module is already loaded

?!!$!Q§%%”.. Dang. Only then, I realized that CIFS was included in my kernel all along. All those articles about CIFS missing in the kernel were leading me to the wrong conclusion. While in reality, my princess was in another castle altogether.

2. Troubleshooting CIFS

A note here: everything that follows is not necessarily specific to Android phones. It’s in fact pure Linux stuff.

It’s actually pretty easy to check whether CIFS support is included in the kernel: check if the following directory exists. I wish I had known that a while ago:

root@android:/ # l /proc/fs/cifs
-r--r--r-- root     root            0 2013-10-07 19:47 DebugData
-r--r--r-- root     root            0 2013-10-07 19:47 LinuxExtensionsEnabled
-r--r--r-- root     root            0 2013-10-07 19:47 LookupCacheEnabled
-r--r--r-- root     root            0 2013-10-07 19:47 MultiuserMount
-r--r--r-- root     root            0 2013-10-07 19:47 SecurityFlags
-r--r--r-- root     root            0 2013-10-07 19:47 cifsFYI
-r--r--r-- root     root            0 2013-10-07 19:47 traceSMB

So if you have that, you are almost there. If you don’t, check if there is a kernel module for CIFS. It’s usually somewhere like /lib/modules/&lt;version&gt;/kernel/fs/cifs/cifs.ko. On my Android phone it would have been in /system/lib/modules though; I guess Android phones are a little bit different. If you don’t have that either, you are probably out of luck and really in the position that I thought I was in. Ask around in forums. Chances are that there is a custom kernel with CIFS integrated or that somebody has actually compiled a CIFS module. The chances are slim though, because there are so many phones and the kernel is usually specific to each model. Anyway, xda developers is a great starting point.
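
That module search can be run in one line; a sketch covering both the usual Linux location and the Android one mentioned above:

```shell
# Look for a loadable CIFS module in the common module directories:
find /lib/modules /system/lib/modules -name 'cifs.ko' 2>/dev/null || true
```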

Now I tried mounting from the shell (get Terminal Emulator or something for that). Here is an example:

root@android:/ # mount -o username=guest,password=guest -t cifs // /sdcard/shares/nas
mount: Invalid argument

Yes that “Invalid argument” error was actually the same error that CifsManager was throwing at me all along. Let’s check the kernel messages with dmesg again:

<3>[ 5924.225392] CIFS VFS: Connecting to DFS root not implemented yet

Huh, what the heck is that supposed to mean? Luckily, since I already had the kernel source, I was able to search for that error in the source code. Note that the kernel version for my phone was 3.4.0. While looking around in the source code, I found out that one can enable additional debug messages by writing 1 into /proc/fs/cifs/cifsFYI:

255|root@android:/system/lib/modules # echo 1 > /proc/fs/cifs/cifsFYI

Now I tried mounting again and checked with dmesg once more:

<7>[ 7213.456950] cifsfs.c: Devname: // flags: 32768
<7>[ 7213.457041] connect.c: Username: guest
<7>[ 7213.457102] connect.c: file mode: 0x1ed  dir mode: 0x1ed
<7>[ 7213.460002] connect.c: CIFS VFS: in cifs_mount as Xid: 2 with uid: 0
<7>[ 7213.460063] connect.c: UNC: (null) ip: (null)
<3>[ 7213.460093] CIFS VFS: Connecting to DFS root not implemented yet
<7>[ 7213.462535] connect.c: CIFS VFS: leaving cifs_mount (xid = 2) rc = -22

Finally something useful. Especially the line containing UNC: (null) ip: (null) looked fishy. Why was there neither a UNC path nor an IP address? So I looked further into the CIFS source code, especially connect.c, since all the messages originated from there. I was really surprised by what I saw: the device name (// in my case) was not used anywhere. I don’t know if that’s a bug in kernel 3.4.0; at the very least it’s a very unexpected feature. Instead, one has to specify the UNC path with the mount parameter unc. After a little trial and error, I also noticed that the unc path needed to be specified with backslashes \ instead of slashes /. So here we go again:

root@android:/ # mount -o unc=\\\\\\Public,username=guest,password=guest -t cifs none /sdcard/shares/nas

Wow, no error message. As you can see, I didn’t even specify the UNC path as the device (just none), only as a mount parameter. Also note that the backslashes need to be escaped by another backslash, because they have a special meaning in the shell (\\\\ actually means \\).
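
The double escaping can also be sidestepped by single-quoting the whole option string. Here is a quick demonstration of how the shell treats the backslashes, using a generic \\server\Public path (the real server name is omitted above):

```shell
# Unquoted, the shell halves each pair of backslashes before mount sees them:
unquoted=$(printf '%s' \\\\server\\Public)
# Inside single quotes the backslashes pass through untouched:
quoted='\\server\Public'
printf '%s\n%s\n' "$unquoted" "$quoted"
```

So something like mount -o 'unc=\\server\Public,...' should behave the same as the escaped version, and is much easier to read.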

Checking with mount if it was actually successful:

root@android:/ # mount
none /storage/sdcard0/shares/nas cifs rw,relatime,sec=ntlm,unc=\\\Public,username=guest,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.0.200,file_mode=0755,dir_mode=0755,nounix,serverino,rsize=61440,wsize=65536,actimeo=1 0 0

Finally! After many months I was able to connect my phone to Windows shares again.


So, long story short, my problem was very specific to kernel 3.4. As I updated my phone to a new firmware, I also got a new kernel version, which didn’t accept the usual mount syntax. You need to specify the UNC path as a mount parameter and not as a device name. Like this:

mount -o unc=\\\\\\Public,username=guest,password=guest -t cifs none sharedir

I guess that is not that big of a deal on full blown Linux machines, because you can also use mount.cifs. But that command is not available on Android devices.

From what I saw in the CIFS kernel sources, the “Invalid argument” error is issued for virtually every problem. So there could be a thousand possible causes, and all you get is “Invalid argument”. And that error does not necessarily mean that CIFS is not supported (as stated in many posts around the Internets). You actually get a different error if CIFS is not supported (unknown filesystem type). Most likely it is another problem. Looking at the kernel messages was really helpful.

Furthermore, I verified that a more recent kernel actually works as one would expect:

mount -t cifs \\\Public sharedir

Nevertheless, the following steps should be helpful for any mounting problems with CIFS (or other filesystems):

  1. Verify that CIFS is supported by the kernel by checking if /proc/fs/cifs exists.
  2. You will get “Invalid argument” for almost any problem. You need to check the kernel messages with dmesg.
  3. Put the value “1” into /proc/fs/cifs/cifsFYI to get more detailed kernel messages.
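
The three steps above can be sketched as a small diagnostic snippet. This is a rough sketch; run it in a root shell on the device:

```shell
#!/bin/sh
# 1. Is CIFS built into the running kernel?
if [ -d /proc/fs/cifs ]; then
    echo "CIFS is built into the kernel"
    # 3. Turn on verbose CIFS messages for the next mount attempt:
    echo 1 > /proc/fs/cifs/cifsFYI 2>/dev/null || echo "could not enable cifsFYI (not root?)"
else
    echo "no /proc/fs/cifs -- CIFS is missing or only available as a module"
fi
# 2. After a failed mount, the real reason shows up in the kernel log:
dmesg 2>/dev/null | tail -n 20
```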

See you around.

How to install Linux Mint Debian Edition (LMDE) on an encrypted hard drive.

Filed under IT, Linux, VirtualBox

Debian is my favorite distro, but I usually don’t recommend it as a desktop environment. LMDE is a nice alternative for Debian fans who want to use Debian as their everyday home/work desktop. Unlike Ubuntu, it’s a pure Debian installation (basically Debian Testing), but it uses a more agile and up-to-date package repository. Many software packages which are known to be a hassle on Debian desktops run seamlessly out of the box. Who hasn’t cried out “Come on Debian, who needs Iceweasel. I want the latest version of Firefox with Flash running”? LMDE goes a little bit along the lines of “I like to have my cake and eat it too”.

One thing that bothered me, though, is that the LMDE installer didn’t offer any option to encrypt the partition(s). I found a howto by hashstat, Howto install LMDE with LVM (with or without encryption). However, I didn’t like the idea of transforming the Live CD into the final system.

The main part of the solution I describe here is heavily based on hashstat’s howto. However, we install LMDE onto a virtual machine and then transfer it onto the encrypted partition. There are three steps:

  1. Install LMDE on a VirtualBox VM
  2. Prepare encrypted disk
  3. Transfer LMDE from the virtual machine to the real machine

In my case, I installed LMDE on a virtual machine on my notebook. If you don’t have an extra computer, it might also be possible to install LMDE on a flash drive and use it later to transfer it to your real machine. However I didn’t try that. The virtual machine method was fitting for me, since I had LMDE already installed on it for a test drive.

Step 1: Install LMDE on a VirtualBox VM

I was using VirtualBox. The steps are pretty similar for any other VM solution (e.g. VMware).

So let’s create a new virtual machine. Select Linux / Debian (or Debian 64bit if you are going to run a 64bit LMDE) as the guest system. I gave it 1GB of RAM and the default 8GB disk space.

In the settings I changed the network adapter to “Bridged” mode.

When you start the VM for the first time, it asks for a boot medium. Select the LMDE image (e.g. linuxmint-debian-201012-gnome-dvd-amd64.iso). The rest is straightforward. Install LMDE as you would on a standalone PC. You don’t need to create separate partitions for /boot or /home as you would on a real install, since we transfer the whole file system later anyway.

Make sure that ssh is installed, so we can remotely copy everything over the network onto our real machine later on.

sudo -s
apt-get update
apt-get install ssh

Give root a password, so we can log into the virtual machine remotely as root:

passwd root

Test to see if we can log into the machine as root over ssh:

ssh root@localhost

It should ask you for your password and let you log in.

Step 2: Prepare encrypted disk

Now boot into the LMDE Live DVD on your machine where you want LMDE to be installed. Open a shell and install some additional packages that we need to create an encrypted partition.

sudo -s
apt-get update
apt-get install lvm2 squashfs-tools cryptsetup

Create the partitions

On my system the drive was on /dev/sda. So from here on I use it as the disk device. Please replace it with the device that is on your system (e.g. /dev/hda). We need to create the partitions from scratch. We can use gparted for that purpose.

gparted /dev/sda

Create a new partition table from the Device menu. Then add a 256MB boot partition at the beginning of the drive. The rest of the drive is filled with a single unformatted partition, which will contain the encrypted volume later on. If you already had data on your disk, you might need to delete preexisting partitions. Here is a screenshot of the final result.

Now we can encrypt the partition. It’s also a good idea to fill the partition with some random data to counteract certain key recovery techniques.

dd if=/dev/urandom of=/dev/sda2 bs=1M
cryptsetup luksFormat /dev/sda2
cryptsetup luksOpen /dev/sda2 sda2_crypt

The first cryptsetup command creates the encryption. It will ask you for a password. Never forget it! The second command opens the encrypted device at /dev/mapper/sda2_crypt.
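
To double-check that the format step worked, cryptsetup can dump the LUKS header. This is an optional check; run it on the machine holding the disk, with /dev/sda2 as above:

```shell
# Show the first lines of the LUKS header, if the device and tool are present:
if [ -b /dev/sda2 ] && command -v cryptsetup >/dev/null 2>&1; then
    cryptsetup luksDump /dev/sda2 2>/dev/null | head -n 5
else
    echo "skipping: /dev/sda2 or cryptsetup not available here"
fi
```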

In the next step we create several LVM logical volumes. You can think of a logical volume as some sort of partition that lives inside the encrypted partition. We also create the swap partition as a logical volume. That way the swap space is also encrypted.

VOLUME=/dev/mapper/sda2_crypt
pvcreate $VOLUME
vgcreate volumes $VOLUME
lvcreate -n lmde -L 10G volumes
lvcreate -n swap -L 2G volumes
lvcreate -n home -L 50G volumes

This will create a logical volume each for the root, swap and home file systems. If you want the last volume to fill the rest of the volume group, just enter a bigger amount than there is actually space left. From the resulting error message, you get the number of extents left.

  Volume group "volumes" has insufficient free space (959 extents): 12800 required.

In my case I had 959 extents left. So I created the last volume by specifying the exact number of extents:

lvcreate -n home -l 959 volumes
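
As an aside, newer lvm2 releases understand -l 100%FREE, which fills whatever space is left and avoids the extent arithmetic altogether. A sketch, assuming the volume group "volumes" from above exists:

```shell
# Fill the rest of the "volumes" group in one go (needs lvm2 and root):
if command -v lvcreate >/dev/null 2>&1; then
    lvcreate -n home -l 100%FREE volumes || echo "volume group 'volumes' not found"
else
    echo "lvm2 is not installed on this machine"
fi
```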

It’s time to create file systems on our freshly created volumes. The boot partition will be formatted with ext2; root and home are formatted with a journaling file system (e.g. ext4).

mkswap -L swap /dev/volumes/swap
swapon /dev/volumes/swap
mkfs -t ext2 -L boot /dev/sda1
mkfs -t ext4 -L root -j /dev/volumes/lmde
mkfs -t ext4 -L home -j /dev/volumes/home

Step 3: Transfer LMDE from the virtual machine to the real machine

During the following steps, we copy all the data from the LMDE installed on the virtual machine onto our real system.

In order to access our new file system, we need to mount it.

mount /dev/volumes/lmde /mnt
mkdir /mnt/boot /mnt/home
mount /dev/sda1 /mnt/boot
mount /dev/volumes/home /mnt/home

Now we are ready to transfer the LMDE installation. We are going to use rsync over ssh to copy all the data from the virtual machine. Replace remotehost with the IP address of the virtual machine. You can use ifconfig on the VM to find out its IP.

rsync -avz --exclude=proc --exclude=sys --exclude=dev/pts -e ssh root@remotehost:/ /mnt

We need to edit a few files to reflect the physical properties of our machine.

Edit /mnt/etc/crypttab to set your encrypted device:

sda2_crypt      /dev/sda2       none    luks

Edit /mnt/etc/fstab to reflect your devices. Basically, delete everything except the lines for proc and cdrom, and add the mount points for /, /boot, /home and the swap space.

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
LABEL=boot      /boot           ext2    defaults        0       2
/dev/volumes/lmde /             ext4    errors=remount-ro 0     1
/dev/volumes/home /home         ext4    defaults        0       2
/dev/volumes/swap none          swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto 0       0

Copy the current resolv.conf to the new system. That way we have a name server defined and can access the internet for a package update later on.

cp /etc/resolv.conf /mnt/etc/

Now it’s time to chroot into the new file system. chroot changes the apparent root directory for the currently running process. That way we can work on our new system as if it were the running file system, even though it isn’t. We also need to mount some special devices in order for the devices and kernel files to be accessible.

mount --bind /dev /mnt/dev
chroot /mnt
mkdir /sys
mkdir /proc
mount -t sysfs none /sys
mount -t proc none /proc
mount -t devpts none /dev/pts

First, we install some additional packages, so our system is able to access the encrypted partition and the volumes during the boot process.

apt-get update
apt-get install cryptsetup lvm2

If you edit /etc/crypttab after installing cryptsetup, you need to run update-initramfs -u for the settings to be picked up at boot time.

Finally we need to install a boot loader. I was using grub which is pretty much standard. Accept the default options. When asked for the GRUB install device, select /dev/sda (or whatever your disk device is).

dpkg-reconfigure grub-pc

At this point we are actually done. But it doesn’t hurt to unmount everything and back out from chroot again.

umount /dev/pts
umount /proc
umount /sys
umount /mnt/dev
umount /mnt/home
umount /mnt/boot
umount /mnt

Let’s make sure everything was written to the file system and reboot.

sync
reboot

If all went well, you will see the GRUB boot loader during start up. Shortly after that, the system will ask you for the password in order to access the encrypted file system. After the boot process has finished, you are ready to use your brand new LMDE on an encrypted drive. Have fun.

Geo Relocate Firefox Extension

Filed under Firefox, Geo Relocation Firefox Extension, Internet, Programming, Uncategorized

The other day I wrote about my experiments with Firefox’s implementation of the W3C geo location API.

I was able to package my modifications as a Firefox extension, which anybody can use to change the coordinates reported by the geolocation API. Check it out at the Geo Location Firefox Extension project page.

Copy Paste Hell Between Windows Host and Linux Guest in VirtualBox

Filed under Linux, VirtualBox

The X server maintains three different selection buffers, PRIMARY, SECONDARY and CLIPBOARD. The PRIMARY buffer is usually used for copying and pasting via the middle mouse button. And the CLIPBOARD buffer is used when the user selects some data and explicitly requests it to be “copied” to the clipboard, such as by invoking “Copy”.

The VirtualBox client tool synchronizes the Windows clipboard content to both the PRIMARY and CLIPBOARD buffers of a Linux guest. But it only synchronizes the CLIPBOARD buffer back out to the Windows host. The reason why it’s not using the PRIMARY selection is that merely selecting text in Linux would immediately overwrite the Windows clipboard, which is unexpected behavior for Windows users. At least that’s their excuse.

It might not be the expected behavior for Windows users, but it’s definitely not the expected behavior for Linux users either. The forums are filled with posts like “Copy/Paste from Linux guest to Windows doesn’t work”. It is especially annoying that text selected in a terminal window cannot easily be pasted out to the Windows host. There are a couple of solutions, but none are really satisfying.

Automatically synchronize PRIMARY and CLIPBOARD

There are tools which let you synchronize the different buffers. For example, autocutsel can do this; you run two instances of it so it works both ways:

# keep clipboard in sync with primary buffer
autocutsel -selection CLIPBOARD -fork

# keep primary in sync with clipboard buffer
autocutsel -selection PRIMARY -fork

This causes the contents of the PRIMARY buffer to be automatically copied into the CLIPBOARD buffer and vice versa. So the selected text is immediately pastable on the host Windows. The -fork argument lets autocutsel run in the background. All that’s left to do is to insert those lines in the startup/init script of your choice to make it permanently available.

However, there is a big disadvantage. The buffers are not meant to be mixed together. In many applications you can select some text and paste a previously copied text over it by hitting CTRL-V or SHIFT-Ins. This stops working, because as soon as you select text it is synchronized from the PRIMARY to the CLIPBOARD buffer, and pasting would just paste the selected text over itself.

Manually synchronize PRIMARY and CLIPBOARD

Synchronizing the buffers all the time does not seem to be the best solution. But we can copy the PRIMARY buffer to CLIPBOARD on demand. For example:

xsel | xsel -b

xsel is a very useful tool which lets you pipe content into and out of the different buffers. The above command pipes the content of PRIMARY to stdout, which is then piped back into CLIPBOARD. The argument -b tells xsel to use CLIPBOARD; without an argument it uses PRIMARY. Check out its man page for other uses.

We could probably package this command into a script and bind it to a system wide keyboard shortcut. That way we would only have to hit this shortcut after selecting some text to be able to paste it on the windows side.
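
For example, the command can be installed as a small script (the name primary2clip and the $HOME/bin location are my choices, not a convention) and then bound to a shortcut in the desktop’s keyboard settings:

```shell
# Install a tiny helper that copies PRIMARY into CLIPBOARD on demand:
mkdir -p "$HOME/bin"
cat > "$HOME/bin/primary2clip" <<'EOF'
#!/bin/sh
# copy the PRIMARY selection into the CLIPBOARD buffer
xsel | xsel -b
EOF
chmod +x "$HOME/bin/primary2clip"
```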


Different applications offer different shortcuts to copy to and from the CLIPBOARD buffer. For example, the Gnome Terminal uses CTRL-SHIFT-C/CTRL-SHIFT-V. Other applications support the Windows style shortcuts CTRL-C/CTRL-V or CTRL-Ins/SHIFT-Ins (e.g. Firefox). However it’s a hit and miss because there doesn’t seem to be a standard way to copy/paste in X11. So while it’s possible in most programs it’s difficult to remember which program uses which shortcut. Most of the time I keep hitting different shortcuts until it works. It’s a mess.


To make a long story short: I have not found a really satisfying solution. Both of the above solutions work, but they are either awkward or have unwanted side effects. So if you find a better solution, or any other solution at all, please let me know by leaving a comment.

Spoofing W3C Geolocation from a Different Angle

Filed under Firefox, Hacking, Internet

The other day I watched an episode of Hak5 – Spoofing the W3C Geolocation API, where Darren introduced the W3C geolocation API. (Btw, Hak5 is so awesome, you definitely have to check them out right now if you don't know them already.) This API uses information like the IP address, RFID, WiFi and Bluetooth MAC addresses, and GSM/CDMA cell IDs to determine the location of a user's computer. It's implemented by browsers like Firefox and Chrome.

One can test the geolocation API by going to Google Maps and clicking the "Show My Location" icon. The geolocation API is actually pretty simple: browsers make it available through a JavaScript object, which can be queried by the web application for the location. For privacy reasons, the browser will ask you whether it is allowed to detect your location, since it has to transfer WiFi addresses and the like over the net.

    <script type="text/javascript">
    // locate position

    // callback function
    function displayPosition(pos) {
        var mylat = pos.coords.latitude;
        var mylong = pos.coords.longitude;
        alert('Your longitude is: ' + mylong + ' and your latitude is ' + mylat);
    }

    // ask the browser for the current position
    navigator.geolocation.getCurrentPosition(displayPosition);
    </script>

A more complete example script, including showing your position on a map, can be found here.

Darren presented his ongoing work on spoofing the API by generating fake 802.11 beacon frames to fool it into finding a fake location. His marvelous approach was not quite successful yet, but it definitely made me think about it. Since the API is implemented in the browser, it should be possible to hack the browser to spit out wrong data. The easiest target is Firefox, since it is open source.

So I downloaded the latest source of Firefox and started my own build. Since Darren has been using BackTrack 5 for a while, and I was just playing with it too, I decided to compile the source on BackTrack 5. I explain the process in another post, Building Firefox on BackTrack 5. There were a couple of showstoppers at first, but it finally worked out, and in the end I had my first self-made build of Firefox. I also built it with debug symbols, so I could debug it with ddd.

Moving Somewhere Else

After looking around for a while it seemed like something was happening in mozilla-central/dom/src/geolocation/nsGeolocation.cpp, but the actual location data was coming from somewhere else. After examining the call stack, it seemed to come from something called NetworkGeolocationProvider. After searching around a bit, I realized that it was actually JavaScript code located in mozilla-central/dom/system/NetworkGeolocationProvider.js. This was almost too good to be true, since debugging C++ is somewhat painful. Now I could modify this JavaScript file and only needed to restart Firefox; no further recompilation was required from then on.

But changing NetworkGeolocationProvider.js didn't work out that well at first. After playing around for a while I found out that there is a startupCache folder in Firefox's user directory, which holds components like NetworkGeolocationProvider.js in a compact form. That cache has to be deleted for changes to NetworkGeolocationProvider.js to take effect.

NetworkGeolocationProvider.js is actually not that big. It basically makes a JSON RPC call to Google's geolocation service. Most of the work happens in the method onChange. I changed the method to skip the RPC call altogether:

onChange: function(accessPoints) {
    LOG("onChange called");
    this.hasSeenWiFi = true;

    // hard-coded fake location instead of the result of the RPC call
    var obj = {
        location: {
            latitude: 38.897669,
            longitude: -77.03655,
            accuracy: 18000
        }
    };

    var newLocation = new WifiGeoPositionObject(obj.location);
    var update = Cc[";1"].getService(Ci.nsIGeolocationUpdate);
    update.update(newLocation);
},

Now let's see where Google Maps thinks my computer lives:

Holy cow, it worked. Google Maps was telling me that my machine was somewhere inside the White House. I just hope the Secret Service is not going to knock on my door for this.

More Debug Output

By studying NetworkGeolocationProvider.js I found out some interesting things. There is a Firefox preference that enables debug output to the console. It can be turned on by adding the following line to your Firefox prefs.js:

user_pref("geo.wifi.logging.enabled", true);

The output looks something like this (values changed for privacy reasons):

*** WIFI GEO: startup called.  testing mode isfalse
*** WIFI GEO: watch called
*** WIFI GEO: onChange called
*** WIFI GEO: provider url =
*** WIFI GEO: client sending: {"version":"1.1.0","request_address":true,"access_token":"2:HgigENDl1b5hI9PH:KgYfCECwwjxHjuRX","wifi_towers":[]}
*** WIFI GEO: xhr onload...
*** WIFI GEO: service returned: {"location":{"latitude":-78.070714,"longitude":15.439504,"address":{"country":"Austria","country_code":"AT","region":"xxx","county":"xxx","city":"xxx","street":"xxx","postal_code":"xxx"},"accuracy":18000.0}}
*** WIFI GEO: shutdown  called

People like Darren might be interested in the values transmitted inside the parameter "wifi_towers". Since this machine has no WiFi radio, it's empty.

There is also another pref setting called geo.wifi.uri, which lets one specify a different server instead of Google's default. It would not be very hard to implement a simple JSON server that spits out fake data. That way we wouldn't even need to modify the browser.
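As a sketch of that idea, here is a minimal Python 3 JSON server that always answers with the fake White House location used above. The handler class, port, and response shape are my own choices, not something from the Firefox source:

```python
#!/usr/bin/env python3
"""Sketch: a fake geolocation server to point geo.wifi.uri at."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# the same fake coordinates used in the modified onChange above
FAKE_LOCATION = {
    "location": {"latitude": 38.897669, "longitude": -77.03655, "accuracy": 18000}
}

class FakeGeoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # consume the submitted wifi_towers payload, then ignore it
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = json.dumps(FAKE_LOCATION).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def run(port=8080):
    """Serve forever on localhost; point geo.wifi.uri at this address."""
    HTTPServer(("127.0.0.1", port), FakeGeoHandler).serve_forever()
```

Calling run() and setting geo.wifi.uri to the server's local address should then feed every geolocation request the canned answer.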

Using the Geolocation API without a Browser

It might be useful to play around with Google's API without having to go through the browser. I wrote a simple Python script that requests the geolocation data directly:


import json
import urllib2

url = ""
headers = {'Content-Type': 'application/json'}
args = {"version" : "1.1.0", "request_address" : True, "wifi_towers" : []}

# create arguments in json format
data = json.dumps(args)
print 'Sending:  ' + data

# perform the http post request
req = urllib2.Request(url, data, headers)
f = urllib2.urlopen(req)

# read the response
response = f.read()
print 'Response: ' + response

The script returns the same data that we already saw in the browser's debug log:

Sending:  {"request_address": true, "version": "1.1.0", "wifi_towers": []}
Response: {"location":{"latitude":-78.070714,"longitude":15.439504,"address":{"country":"Austria","country_code":"AT","region":"xxx","county":"xxx","city":"xxx","street":"xxx","postal_code":"8010"},"accuracy":25000.0},"access_token":"2:NoLQveYeYxfkNboW:efp2-9AbcWBLNbcE"}


I only scratched the surface of how the browser obtains the geolocation data. Web apps should obviously not rely on the data returned by the geolocation API in life-and-death situations. The W3C spec clearly states:

No guarantee is given that the API returns the device's actual location.

It is nevertheless a very useful tool and learning about what the browser is doing under the hood to acquire the geo data was certainly fun.

Building Firefox on BackTrack 5

Filed under Firefox, Linux

Building Firefox on Linux is actually pretty easy. It used to be a bit harder with earlier versions of Firefox, but since Gecko 5.0 it’s not hard at all.

Since we are building on BackTrack 5, which is a Debian offspring, we more or less need to follow the instructions for building Firefox on Ubuntu. Some details about that can be found at Simple Firefox build. As you will see later, the only tricky part is getting all the required packages for the build process.

Get the Source, Luke

First we have to install the suggested packages with apt-get:

root@bt:~/firefox# apt-get install mercurial libasound2-dev libcurl4-openssl-dev libnotify-dev libxt-dev libiw-dev mesa-common-dev autoconf2.13 yasm

Now, let’s get the latest Firefox source through mercurial:

root@bt:~/firefox# hg clone

All that is left to do is to start the build:

root@bt:~/firefox/mozilla-central# make -f

Missing Dependencies

At the beginning of the build process, it checks that all the needed build tools and libraries are installed. Unfortunately, in my case it complained that the installed version of yasm was too old:

checking for YASM assembler... checking for yasm... yasm
configure: error: yasm 1.0.1 or greater is required to build with libjpeg-turbo's optimized JPEG decoding routines, but you appear to have version 0.8.0.  Upgrade to the newest version or configure with --disable-libjpeg-turbo to use the pure C JPEG decoder.  See for more details.
*** Fix above errors and then restart with               "make -f build"
make[2]: *** [configure] Error 1
make[2]: Leaving directory `/root/firefox/mozilla-central'
make[1]: *** [obj-x86_64-unknown-linux-gnu/Makefile] Error 2
make[1]: Leaving directory `/root/firefox/mozilla-central'
make: *** [build] Error 2

dpkg -l also told me the same thing. The installed version of yasm was 0.8.0-1:

root@bt:~/firefox/mozilla-central# dpkg -l | grep yasm
ii yasm 0.8.0-1 modular assembler with multiple

I didn't find a newer version of yasm in the BackTrack repository. So I went to Debian Packages and downloaded the current release of yasm from Debian Testing, which was 1.1.0-1 at the time. Since I am running BackTrack 5 64-bit, I needed the amd64 version.

root@bt:~# wget

A .deb package can be installed with dpkg --install. It replaces the currently installed version:

root@bt:~# dpkg --install yasm_1.1.0-1_amd64.deb
(Reading database ... 214367 files and directories currently installed.)
Preparing to replace yasm 0.8.0-1 (using yasm_1.1.0-1_amd64.deb) ...
Unpacking replacement yasm ...
Setting up yasm (1.1.0-1) ...
Processing triggers for man-db ...

After trying the build again, it complained about the missing libIDL-2.0:

checking for libIDL-2.0 >= 0.8.0... Package libIDL-2.0 was not found in the pkg-config search path. Perhaps you should add the directory containing `libIDL-2.0.pc' to the PKG_CONFIG_PATH environment variable No package 'libIDL-2.0' found
configure: error: Library requirements (libIDL-2.0 >= 0.8.0) not met; consider adjusting the PKG_CONFIG_PATH environment variable if your libraries are in a nonstandard prefix so pkg-config can find them.
*** Fix above errors and then restart with               "make -f build"
make[2]: *** [configure] Error 1
make[2]: Leaving directory `/root/firefox/mozilla-central'
make[1]: *** [obj-x86_64-unknown-linux-gnu/Makefile] Error 2
make[1]: Leaving directory `/root/firefox/mozilla-central'
make: *** [build] Error 2

We can find the name of the missing package with the help of apt-cache.

root@bt:~/firefox/mozilla-central# apt-cache search libidl
libidl0 - library for parsing CORBA IDL files
libidl-dev - development files for programs that use libIDL

Usually when compiling stuff we need the -dev version of a package, which contains the header files and linkable libraries required for a build. Therefore we can assume that libidl-dev is the missing package. This time it is in the BackTrack repository, so we can install it with apt-get:

root@bt:~/firefox/mozilla-central# apt-get install libidl-dev

A Monster Is Born

All dependencies should be fulfilled now, and it should compile without further complaints. Once the build has finished, you can find the executable inside the object directory (obj-x86_64-unknown-linux-gnu, as seen in the make output above).

I suggest compiling the source on a powerful machine. I was doing it inside a VirtualBox VM, which was not such a good idea: the build took about three hours, and the VM even kept crashing on me while building. I had to give the VM 2 GB of RAM (initially 1 GB) to make it work.

Debug Build

Since I wanted to find out something about the inner workings of Firefox, it would be handy to run Firefox inside a debugger like ddd. For that, it makes sense to compile Firefox with debug symbols. This can be accomplished by putting special compile settings in a file named .mozconfig in the root folder of the Firefox source tree:

root@bt:~/firefox/mozilla-central# echo "ac_add_options --enable-debug-symbols" > .mozconfig

You can find out more about configuring the build at Configuring Build Options and Building Firefox with Debug Symbols.

Resizing a Linux Partition (Running in VirtualBox)

Filed under IT, Linux, VirtualBox

I was recently setting up a new test environment (BackTrack 5) inside a VirtualBox VM. Halfway down the road of my project I realized that I had been way too stingy with the amount of space I assigned to the virtual drive. Well, I thought, no problem: I am going to enlarge the virtual drive. The steps would be easy:

  1. Enlarging the virtual drive
  2. Enlarging the partition holding the root file system with parted
  3. Enlarging the file system with resize2fs

Easy in theory. In reality it was still a little bit tricky.

Note that everything I describe in steps 2 and 3 also applies to a real Linux machine. Step 1 is only necessary because I happened to be working inside a VM. On a real machine, step 1 would be something like copying the data of the old physical drive onto a newer, bigger drive with dd or the like.

I also realize that there may be more polished programs like Partition Magic, which would have saved me some trouble. But I wanted to make it work right here and now with the tools I had at hand, which was parted. Even plain old fdisk would probably have done.

It goes without saying that before you do things like this, you need to make a backup if the data is of any value to you at all.

Step 1: Enlarging the Virtual Drive

This was the easy part. The virtual drive is actually a .vdi file, and there is a VirtualBox command-line tool that can resize it. On a real computer, this would correspond to copying the hard drive data onto a larger disk.

My host computer is running Windows. The virtual drive was initially set to have a size of 8GB. I wanted it to be twice as big. So the command looks like:

C:\Users\Netzgewitter\VirtualBox VMs\BackTrack5>"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd "BackTrack 5.vdi" --resize 16384

The operation only took a second. The size is given in MB, so 16GB = 16*1024MB = 16384MB.
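The megabyte arithmetic behind the --resize value, spelled out (the variable names are mine):

```python
# VBoxManage's --resize argument is given in (binary) megabytes.
target_gb = 16
mb_per_gb = 1024
target_mb = target_gb * mb_per_gb  # the value passed to --resize: 16384
```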

Step 2: Resizing the Partition

I booted up the VM and checked the hard drive size with fdisk -l. Yep, /dev/sda was now 16GB. So all I had to do was enlarge the partition holding the root file system and then resize the file system with resize2fs.

Parted didn't let me resize the partition, since it was mounted as the current root file system at /. That was expected, so I booted Linux from a live CD. I used the BackTrack 5 CD since I had it at hand, but any other live CD like Knoppix would have done just as well.

Once I’d booted into the live CD I checked the partitions again with parted:

GNU Parted 2.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit cyl
(parted) print
Disk /dev/sda: 2088cyl
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 2088,255,63.  Each cylinder is 8225kB.
Partition Table: msdos

Number  Start   End      Size    Type      File system     Flags
 1      0cyl    993cyl   993cyl  primary   ext4            boot
 2      993cyl  1044cyl  50cyl   extended
 5      993cyl  1044cyl  50cyl   logical   linux-swap(v1)

As you can see, I switched the units to cylinders with "unit cyl". That way it would be easier to determine the new end of the partition. Since the swap partition sat right behind the file system partition, it was not possible to just resize the latter. First I had to move the swap partition to the end of the drive. There is a command in parted to move a partition, but it didn't let me move an extended partition. So instead I tried to remove partition 2 and recreate it at the end of the drive:

(parted) rm 2
Error: Partition /dev/sda2 is being used. You must unmount it before you modify it with Parted.

Dang. It didn't let me. The reason was that the live CD had automatically detected and activated the swap partition. So I first had to turn it off:

root@bt:~# swapoff /dev/sda5

Now I tried to remove the swap partition again with parted:

(parted) rm 2
(parted) print
Disk /dev/sda: 2088cyl
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 2088,255,63.  Each cylinder is 8225kB.
Partition Table: msdos

Number  Start  End     Size    Type     File system  Flags

 1      0cyl   993cyl  993cyl  primary  ext4         boot

Ok, that did the trick. Now I could recreate the swap partition at the end of the drive with the command mkpart:

(parted) mkpart extended -51 -1
(parted) mkpart logical linux-swap -51 -1
(parted) print                                                            
Disk /dev/sda: 2088cyl
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 2088,255,63.  Each cylinder is 8225kB.
Partition Table: msdos

Number  Start    End      Size    Type      File system     Flags
 1      0cyl     993cyl   993cyl  primary   ext4            boot
 2      2037cyl  2088cyl  50cyl   extended                 
 5      2037cyl  2088cyl  50cyl   logical   linux-swap(v1)

I created a logical partition inside an extended partition again. A second primary partition would have done as well, since we would not exceed the limit of four primary partitions, but I recreated the partitions as they were; it makes no difference. Here you can also see why I switched the units to cylinders: since I knew the partition was exactly 50 cylinders big, I only needed to specify -51 for the start and -1 for the end. No other calculations were required. Negative numbers are counted backwards from the end of the disk, which is a neat feature of parted.

The new swap partition also needs to be initialized:

root@bt:~# mkswap /dev/sda5
Setting up swapspace version 1, size = 413692 KiB
no label, UUID=57eb8574-d471-42e6-8bf4-5ba2ef0bbc0f

Now I was finally ready to enlarge my file system partition. I tried the resize command, but parted was being stubborn again and refused, because the file system had features enabled that parted's resizer was not compatible with:

WARNING: you are attempting to use parted to operate on (move) a file system.
parted's file system manipulation code is not as robust as what you'll find in
dedicated, file-system-specific packages like e2fsprogs.  We recommend
you use parted only to manipulate partition tables, whenever possible.
Support for performing most operations on most types of file systems
will be removed in an upcoming release.
Error: File system has an incompatible feature enabled.  Compatible features are
has_journal, dir_index, filetype, sparse_super and large_file.  Use tune2fs or
debugfs to remove features.

This is somewhat silly of parted, since I only wanted to resize the partition table entry and not the file system as well; I would do the latter with resize2fs myself later on. Maybe there is a trick to make parted do it, but I didn't know any better than to delete the first partition and recreate it at the exact same location. Just bigger.

This is a somewhat scary thought. But the actual data inside the partition is going to be unchanged by the process. We are only manipulating partition table entries here. Up to now we did not do anything really dangerous. So this is the point of no return. Make your backup now or forever hold your peace.

To be really sure that I would place the new partition at the exact same position, I switched the displayed units to chs (cylinder/head/sector):

(parted) unit chs
(parted) print
Disk /dev/sda: 2088,170,1
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 2088,255,63.  Each cylinder is 8225kB.
Partition Table: msdos

Number  Start        End          Type      File system     Flags
 1      0,32,32      933,254,62  primary   ext4            boot
 2      2037,171,54  2088,170,1   extended
 5      2037,204,23  2088,170,1   logical   linux-swap(v1)

So maybe that was not a bad thing. It told me that the partition starts at 0,32,32. If I had created the new partition just by specifying the cylinder, it might have been created at 0,0,0, and I guess that would have been the death of my file system (though I didn't actually try, so I can't say for sure). Anyway, specifying the position by chs is certainly not a bad idea.

Now I removed the partition and created a new one from 0,32,32 to 2036,254,62. The end of the partition is the last sector of cylinder 2036, the cylinder just before the one where the swap partition starts:

(parted) rm 1
(parted) mkpart primary ext4 0,32,32 2036,254,62
(parted) print                                                            
Disk /dev/sda: 2088,170,1
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 2088,255,63.  Each cylinder is 8225kB.
Partition Table: msdos

Number  Start        End          Type      File system     Flags
 1      0,32,32      2036,254,62  primary   ext4           
 2      2037,171,54  2088,170,1   extended
 5      2037,204,23  2088,170,1   logical   linux-swap(v1)

Almost done. Let’s not forget to make the partition bootable:

(parted) toggle 1 boot
(parted) print                                                            
Disk /dev/sda: 2088,170,1
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 2088,255,63.  Each cylinder is 8225kB.
Partition Table: msdos

Number  Start       End          Type      File system  Flags
 1      0,32,32     2036,254,62  primary   ext4         boot
 2      2037,171,54  2088,170,1   extended
 5      2037,204,23  2088,170,1   logical   linux-swap(v1)

Another problem I had was that the partitions in /etc/fstab are identified by their UUID and not by the device file. You can check your drives' UUIDs with blkid:

root@bt:~# blkid
/dev/sda1: UUID="f9ae2c40-1edd-47c0-9ae7-11c4c08dcf50" TYPE="ext4"
/dev/sda5: UUID="57eb8574-d471-42e6-8bf4-5ba2ef0bbc0f" TYPE="swap"

And the content of /etc/fstab was:

# / was on /dev/sda1 during installation
UUID=f9ae2c40-1edd-47c0-9ae7-11c4c08dcf50 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=91eaafab-b4e2-4821-90d7-2b3ef093bdcf none            swap    sw              0       0

Since mkswap had generated a new UUID for the recreated swap partition, I had to adjust the UUID of the swap entry in /etc/fstab.
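I edited the file by hand, but for illustration, here is a hypothetical Python sketch of that adjustment (the function is my own, not a tool I actually used):

```python
#!/usr/bin/env python3
"""Sketch: patch the swap UUID in fstab-style text with the one blkid reports."""

def replace_swap_uuid(fstab_text, new_uuid):
    """Replace the UUID on the line whose filesystem type field is 'swap'."""
    out = []
    for line in fstab_text.splitlines():
        fields = line.split()
        # skip comments; match a swap entry identified by UUID=
        if (fields and not line.lstrip().startswith("#")
                and len(fields) >= 3 and fields[2] == "swap"
                and fields[0].startswith("UUID=")):
            fields[0] = "UUID=" + new_uuid
            line = " ".join(fields)
        out.append(line)
    return "\n".join(out)
```

Run against the fstab content above with the UUID from blkid, it rewrites only the swap line and leaves the root entry and comments untouched.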

Step 3: Enlarging the File System

I powered up the VM, and after a couple of seconds it booted into Linux from the virtual drive again. YEAAH! I only realized afterwards that I had been holding my breath during the whole reboot.

Since the partition now has the right size, we can resize the contained file system. I guess we could have also done that while still working from the live CD, but resize2fs also lets you resize a mounted file system. Calling resize2fs with only the device makes the file system as big as possible:

root@bt:~# resize2fs /dev/sda1

Now let’s check the size of the file system:

root@bt:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              16G  7.0G  7.7G  48% /

Mission accomplished!