==> Thursday, August 1, 2013 <==

Cross Compile Mesa 9.1.5 without X11




Recently I tried to add OpenGL support, at least software-rendered support, to our embedded project, so I tried to cross compile the Mesa 3D library. There were tons of errors about missing X11-related stuff, but we do not need X11 support! After a long and difficult journey, I found a way to compile it without X11, with framebuffer support only.

You must enable the DRI options, otherwise linking will report errors such as:

/mesa/lib/libGLESv2.so: undefined reference to `_glapi_Dispatch'

This is because glapi is not compiled without DRI.
 
See references:
 
https://bugs.freedesktop.org/show_bug.cgi?id=61750
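If you hit this, you can confirm it by listing the library's unresolved dynamic symbols. A minimal sketch, assuming the library path from the error above (point it at your own build output):

```shell
#!/bin/sh
# Sketch: show which dynamic symbols a built library leaves unresolved,
# filtering for the glapi symbols mentioned above. The path is an example.
list_undefined() {
    lib="$1"
    [ -f "$lib" ] || { echo "$lib: not found"; return 1; }
    # -D reads the dynamic symbol table; --undefined-only keeps only the
    # symbols the library expects some other object to provide.
    nm -D --undefined-only "$lib" | grep glapi \
        || echo "$lib: no unresolved glapi symbols"
}
list_undefined mesa/lib/libGLESv2.so || true
```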

The compile commands:

export TOOLCHAIN_TARGET_SYSTEM=arm-none-linux-gnueabi
export TOOLCHAIN_INSTALL_DIRECTORY=/opt/toolchain

./configure CPPFLAGS=-DMESA_EGL_NO_X11_HEADERS CFLAGS=-DMESA_EGL_NO_X11_HEADERS CC=$TOOLCHAIN_TARGET_SYSTEM-gcc CXX=$TOOLCHAIN_TARGET_SYSTEM-g++ --build=$TOOLCHAIN_BUILD_SYSTEM --target=$TOOLCHAIN_TARGET_SYSTEM --host=$TOOLCHAIN_TARGET_SYSTEM --prefix=$TOOLCHAIN_INSTALL_DIRECTORY --enable-opengl --enable-gles2 --enable-gles1 --disable-glx --enable-egl --enable-gallium-egl --enable-dri --with-dri-drivers=swrast --with-gallium-drivers=swrast --with-egl-platforms=fbdev --disable-xorg --disable-xa --disable-xlib-glx

Command explanation:

If MESA_EGL_NO_X11_HEADERS is not defined, compilation fails because the X11 headers are missing, and there is no better option for disabling the use of X11; see "include/EGL/eglplatform.h". So we simply define this macro to avoid the situation.
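Once the build finishes, it is worth checking that no X11 dependency leaked into the result. A minimal sketch, assuming an example path to the built library:

```shell
#!/bin/sh
# Sketch: confirm a cross-built library does not link against X11.
# The library path is an example; substitute your own build output.
check_no_x11() {
    lib="$1"
    [ -f "$lib" ] || { echo "$lib: not found"; return 1; }
    # readelf -d lists the NEEDED entries of the dynamic section;
    # any libX11 line means X11 leaked into the build.
    if readelf -d "$lib" | grep -q 'libX11'; then
        echo "$lib: still links against X11"
        return 1
    fi
    echo "$lib: X11-free"
}
check_no_x11 mesa/lib/libEGL.so || true
```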












Cross Compile ICU 51.2




The IBM ICU library cannot be cross-compiled simply by invoking configure; a few extra steps are needed.

We assume you have already downloaded and unpacked ICU 51.2 to "/opt/icu".

1. Compile ICU for the current operating system
The cross build needs some tools from an ICU build for the current operating system, so we satisfy that requirement first.

First, we make a copy of the ICU source:

cp -rfd /opt/icu /opt/icu_prebuild

Second, we compile it for the current operating system:

cd /opt/icu_prebuild/source
./configure
make

2. Now we can cross compile the ICU library!

export TOOLCHAIN_TARGET_SYSTEM=arm-none-linux-gnueabi
export TOOLCHAIN_INSTALL_DIRECTORY=/opt/toolchain

cd /opt/icu/source
./configure CC=$TOOLCHAIN_TARGET_SYSTEM-gcc CXX=$TOOLCHAIN_TARGET_SYSTEM-g++ CPP=$TOOLCHAIN_TARGET_SYSTEM-cpp --host=$TOOLCHAIN_TARGET_SYSTEM --prefix=$TOOLCHAIN_INSTALL_DIRECTORY --enable-shared=yes --enable-tests=no --enable-samples=no --with-cross-build=/opt/icu_prebuild/source
make

NOTICE: Do not try to build the static version of ICU; it seems to cause errors.
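After "make", it is worth confirming the libraries really target ARM and were not accidentally built with the host compiler. A minimal sketch, assuming an example path into the build tree:

```shell
#!/bin/sh
# Sketch: check the ELF machine type of a built shared library.
# The library path is an example; adjust it for your build tree.
check_arch() {
    lib="$1"
    [ -f "$lib" ] || { echo "$lib: not found"; return 1; }
    # "file" reports the ELF machine type, e.g. "... shared object, ARM ...".
    if file "$lib" | grep -q 'ARM'; then
        echo "$lib: ARM build"
    else
        echo "$lib: not an ARM binary"
    fi
}
check_arch /opt/icu/source/lib/libicuuc.so || true
```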




==> Friday, July 19, 2013 <==

Construct Your Cross-Compilation Toolchain




1 Introduction

Normally, if you want to develop for an embedded system with an ARM CPU, you need a cross-compilation toolchain. There are three ways to obtain one:
1. Use the toolchain provided by your board vendor.
2. Use a prebuilt third-party toolchain.
3. Build the toolchain yourself.
Way 1 is probably the best choice if it fits your requirements, because it has been well tested and is stable enough for development on your specific board.
Way 2 may not fit your kernel or source code. Even if it compiles your code successfully, that does not mean it is problem-free. This choice is for those who do not have time to compile their own toolchain and whose project must start immediately.
Way 3 may suit your requirements best: you can use the stable "linaro" GCC releases, which are sometimes more stable than upstream. This way you can use any GCC version you want and build the toolchain against your specific kernel source without compatibility problems. Of course, such a toolchain has not had much testing on your board; that is the risk you have to accept.
How do we compile our own cross-compilation toolchain? You could compile the toolchain's libraries and utilities one by one, but here we introduce an easier way: crosstool-ng.

2 Preparation

We assume you are using Ubuntu 12.04 with a user named "bob".
WARNING: Do not run the crosstool-ng build as "root"; it will complain that you are root and refuse to start!
WARNING: Your PC should have at least 1 GB of free memory and 20 GB of free disk space, otherwise the build may break. I have been bitten by this.
First we prepare the crosstool-ng build environment; the following utilities must be installed:
aptitude install bison flex texinfo automake libtool patch gcj-jre gcj-jdk gawk
We download crosstool-ng from http://crosstool-ng.org and decompress it to the home directory; the decompressed crosstool-ng source directory is now "/home/bob/crosstool-ng-1.18.0".
Enter the crosstool-ng source directory and execute these commands:
./configure --prefix=$HOME/crosstool-ng
make 
make install
After that, crosstool-ng will be installed into the directory "/home/bob/crosstool-ng".

3 Start our journey!

3.1 Initialize default configuration

Enter the samples directory in the crosstool-ng source tree:
cd ~/crosstool-ng-1.18.0/samples
Find the toolchain you want to compile; we assume "arm-unknown-linux-gnueabi".
Enter the directory:
cd arm-unknown-linux-gnueabi
You will see two files there:
crosstool.config
reported.by
crosstool.config is what we want! Copy it to the installed crosstool-ng directory and rename it to ".config":
cp crosstool.config ~/crosstool-ng/.config
OK, now we have the default configuration for our toolchain. These parameters have been tested, so we do not need to experiment with our own configuration.

3.2 Modify configuration to fit for our requirement

Now we make a few small modifications so the compiled toolchain fits our requirements.
Enter the installed crosstool-ng directory :
cd ~/crosstool-ng
./ct-ng menuconfig
Go to the option Operating System ---> Linux kernel version. You can specify your kernel version; if you have a custom modified kernel, set it to "custom tarball or directory" and point it at your kernel source tree, for example "/home/bob/linux-2.6.35.3".
WARNING: Do not try to change the glibc version or any other library version; that may break the build due to different dependencies. The default configuration has been tested, but not every library version has.
Go to the option C compiler and enable "Show Linaro versions"; the Linaro GCC versions are more stable, so you had better use one of them.
You had better disable Fortran and Java compilation; they cause many complications.
Check the other options to fit your embedded board.

3.3 Build

With all configuration done, we now set off on our long trip:
./ct-ng build
All needed packages will be downloaded to "~/src", so you need to stay connected to the internet.
During the build, a few issues may come up:
Sometimes ct-ng downloads a package as *.tar.lzma but cannot recognize that format; unpacking fails and the build stops. You then need to download the *.tar.gz of that package and put it into ~/src. For example:
If the build fails to unpack expat-2.1.0.tar.lzma, you will find a file named "expat-2.1.0.tar.lzma" in ~/src; download expat-2.1.0.tar.gz manually from expat's official website, put it into ~/src, and delete expat-2.1.0.tar.lzma.
After a successful build, the whole toolchain will be placed in ~/x-tools; that is what you want.
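The workaround above can be sketched as a small helper (the package name is the example from the text; the .tar.gz itself still has to be downloaded by hand from the project's official site):

```shell
#!/bin/sh
# Sketch: drop the .tar.lzma that ct-ng cannot unpack, so the manually
# downloaded .tar.gz in the same directory is used instead.
replace_lzma() {
    srcdir="$1"; pkg="$2"
    rm -f "$srcdir/$pkg.tar.lzma"
    echo "removed $pkg.tar.lzma; put $pkg.tar.gz into $srcdir and rerun ct-ng build"
}
# Example invocation for the case described above:
# replace_lzma ~/src expat-2.1.0
```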




==> Wednesday, June 24, 2009 <==

The "standard" C library?




For a long time, each C compiler has provided a "standard" library, but we all know the "standard" C library is a bit of a joke; they are not actually so "standard". Especially in cross-platform development, depending on differences in each compiler's design and point of view, each compiler adds or removes behaviors in its C library. Even the types differ: socklen_t is defined on Linux but not Windows, SOCKET_ERROR on Windows but not Linux, and sometimes the same function lives in different headers. That forces extra expenditure on us: the ugly #ifdef LINUX ... #elif WIN32 ... #elif MACOSX ... #endif blocks spread everywhere in our code just to compromise between them.

So I have long searched for a cross-platform base library that behaves like the "standard" C library; it must have the characteristics listed below:

(1). It must be based on the current compiler's C library.

(2). It must keep the same types and an API similar to the "standard" C library, so you need not change your code when using a different compiler or a different operating system.

(3). It must be compatible with the types and APIs of the C library. ( for example: you could create a socket handle with socket() or with the compatible API compatible_socket(), and operate on the handle with send() or with compatible_send(); native C library APIs and the compatible APIs should interoperate without difficulty. )

(4). All API behavior must be predictable, executing the same way on every compiler and operating system.

(5). It must have as few dependencies as possible beyond the compiler's C library.

Maybe someone will ask: why not try GLib?

GLib is a good choice for C, and it is an excellent C utilities library, but it is not what I am searching for. First, it implements everything independently of the C library; although it can convert some basic C types to its own (such as FILE), that is not what I want. Second, it is too GNU: it must be compiled in a unix-like environment with pkg-config ( e.g. cygwin, msys, linux, ... ), and it depends on libintl, libiconv, etc. Third, in some cross-platform projects the hardware and environment rule out such a large library ( you know, some compilers we have to use compile the whole library into the executable, which greatly enlarges the final executable size ), and it has so many dependencies. Maybe that is also why so many independent C packages do not choose GLib as their base backend and instead implement their own tiny compatibility layer ( e.g. SDL, FreeImage, tinyxml, ... ).

Actually, what we need in the end is: a library compatible with the standard C library, with similar APIs ( and with all platform/compiler peculiarities removed ).

I have not yet seen anything like this; maybe it exists but I do not know of it.

After a long, long wait, I finally decided to implement it myself ...




==> Tuesday, September 30, 2008 <==

Speed up your Subversion! As fast as Git!




Are you angry at your svn commit speed? (more than ten thousand files)
Do you want your Subversion commit speed to be as fast as Git's or Mercurial's?
Do you have a lot of projects controlled by Subversion?
Do you have projects with more than ten thousand files?
Do you want to extend Subversion into a decentralized version control system?

Follow these steps and you will get what you want :)

1. If you are a developer working alone, do not want to share your repository, and have been using repository access URLs to localhost such as svn://, http://, or svn+ssh://:
just relocate the repository to your local repository path ( e.g. [windows] file:///d:/svndepot/myproject, [linux] file:///path/to/repos ); that is enough for this situation.
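The relocation in step 1 can be done with Subversion's own switch command. A sketch, run inside the working copy (both URLs are examples; on Subversion 1.7 and later the same step is spelled "svn relocate"):

```shell
# Point an existing working copy at the local repository path instead of
# the network URL. Both URLs below are examples; substitute your own.
svn switch --relocate svn://localhost/myproject file:///path/to/repos/myproject
```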

2. If you are a member of a group and share the repository with others on a server:
(1) Download SVK and learn how to set it up and use it.
(2) SVK: take a mirror of the server's repository.
(3) SVK: create a local branch of your project.
(4) Check your project out from the local branch using a local path ( file://... ).
(5) OK, now use your Subversion manager tools ( e.g. TortoiseSVN, RapidSVN, etc. ) and taste the commit speed! ^_^b

NOTE: After you finish your source modifications and want to commit all the changes in your local branch to the remote server, you must use SVK to sync the mirror repository and then push the changes ( these operations commit all changes in your local branch to the remote server ).
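The group workflow can be sketched as SVK commands. The depot paths, project name, and server URL below are all examples; check the SVK documentation for the exact syntax of your version:

```shell
# Mirror the remote repository into the local SVK depot.
svk mirror svn://server/myproject //mirror/myproject
# Pull the remote history into the mirror.
svk sync //mirror/myproject
# Create a local branch of the mirrored project.
svk copy -m "local branch" //mirror/myproject //local/myproject
# Check the branch out and work against it (fast local commits).
svk checkout //local/myproject ~/work/myproject
# When ready to publish: refresh the mirror, then push the branch.
svk sync //mirror/myproject
svk push //local/myproject
```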

I have a project containing 9453 files accessed via svn://; each time I had to waste 10-20 minutes just to commit a few small changes. Now I only need to wait 2 seconds, because the manager tools read the repository from the hard disk instead of the network ^_^! When you are satisfied with all the changes in your local branch, just push them (with a merge) to the server; SVK performs a fast upload ^o^.

Now Subversion is as fast as Git and has become a decentralized version control system ^O^ ....