Archive:Building restricted drivers for XBMCbuntu

THIS PAGE IS OUTDATED:

This page or section has not been updated in a long time, no longer applies, refers to features that have been replaced/removed, and/or may not be reliable.

This page is only kept for historical reasons, or in case someone wants to try updating it.

The simplest approach to building and installing the restricted drivers is to run the driver installer on XBMCbuntu. The installer takes care of the whole installation and modifies the config files accordingly.

However, this requires a full build environment to be available on the system, while the default XBMCbuntu build omits the required packages in order to stay as compact as possible.

As several users have experienced, a full build environment requires around 1 GB of space, and the XBMCbuntu permanent storage file can be filled up quite easily.

The following procedures are intended to tackle this issue; they have side effects and limitations, but once you have a fully working build environment there are no major disadvantages (IMHO).

NOTICE: The simplest and recommended installation method for restricted drivers is to use repositories (PPAs) such as X-swat, since on kernel upgrades you won't need to worry about going through the procedures below.


Building NVIDIA video drivers

  • Method 1: build drivers on XBMCbuntu

NB: You must have a large persistent storage file for this, around 1 GB, or the process will not complete due to insufficient disk space.

Install the required packages:

$ sudo apt-get update
$ sudo apt-get install build-essential cdbs fakeroot dh-make debhelper debconf libstdc++5 dkms linux-headers-$(uname -r)

then cd to where the installer package is saved and run it:

$ sh ./NVIDIA-Linux-x86-180.29-pkg1.run

Update: 185.18 drivers do not install properly using this method. The restricted .img must first be emptied so that it does not conflict. Do the following:

$ mkdir /home/xbmc/temp
$ sudo cp /.bootMedia/restrictedDrivers.nvidia.img /.bootMedia/restrictedDrivers.nvidia.img.backup
$ sudo mount -o loop /.bootMedia/restrictedDrivers.nvidia.img /home/xbmc/temp
$ sudo rm -Rf /home/xbmc/temp/
$ sudo umount /home/xbmc/temp

Reboot

$ sudo ./NVIDIA-Linux-x86-185.18.36-pkg1.run

If this does not work for some reason, you can roll back the change by putting the USB stick in another machine and copying the backup back to the original filename.
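
A minimal sketch of that rollback, assuming the stick's boot partition shows up as /dev/sdb1 and is mounted at /mnt on the other machine (both names are assumptions):

$ sudo mount /dev/sdb1 /mnt
$ sudo cp /mnt/restrictedDrivers.nvidia.img.backup /mnt/restrictedDrivers.nvidia.img
$ sudo umount /mnt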

  • Method 2: build drivers in MIC

Assuming you have created a target in MIC for the creation of the XBMCbuntu image, you need to create a new target having all the necessary development tools ON THE SAME PLATFORM, so that the build target has the same system components as the XBMCbuntu target.

Once done, copy the driver installer script onto the target and perform the following steps in a chrooted terminal on the build target (ignore the error from mv):
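
You need to be inside the chroot before running the steps below; a minimal sketch, assuming the build target's root filesystem lives at /path/to/build-target (a hypothetical path that depends on your MIC setup):

$ sudo chroot /path/to/build-target /bin/bash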

$ sh ./NVIDIA-Linux-x86-180.29-pkg1.run --extract-only
$ cd NVIDIA-Linux-x86-180.29-pkg1
$ mv * usr/bin
$ pushd .

Now a few symlinks need to be created in a few library directories:

$ cd usr/lib
$ ln -s libcuda.so.* libcuda.so.1
$ ln -s libGLcore.so.* libGLcore.so.1
$ ln -s libGL.so.* libGL.so.1
$ ln -s libnvidia-cfg.so.* libnvidia-cfg.so.1
$ ln -s libnvidia-tls.so.* libnvidia-tls.so.1
$ ln -s libvdpau_nvidia.so.* libvdpau_nvidia.so.1
$ ln -s libvdpau.so.* libvdpau.so.1
$ ln -s libvdpau_trace.so.* libvdpau_trace.so.1
$ ln -s libcuda.so.1 libcuda.so
$ ln -s libGLcore.so.1 libGLcore.so
$ ln -s libGL.so.1 libGL.so
$ ln -s libnvidia-cfg.so.1 libnvidia-cfg.so
$ ln -s libnvidia-tls.so.1 libnvidia-tls.so
$ ln -s libvdpau_nvidia.so.1 libvdpau_nvidia.so
$ ln -s libvdpau.so.1 libvdpau.so
$ ln -s libvdpau_trace.so.1 libvdpau_trace.so

$ popd
$ pushd .
$ cd usr/lib/tls
$ ln -s libnvidia-tls.so.* libnvidia-tls.so.1
$ ln -s libnvidia-tls.so.1 libnvidia-tls.so

Some files need to be moved to their appropriate place for Ubuntu:

$ popd
$ pushd .
$ cd usr
$ mkdir lib/xorg
$ mv X11R6/lib/* lib/xorg

and again some symlinks are to be created:

$ cd lib/xorg
$ ln -s libXvMCNVIDIA.so.* libXvMCNVIDIA.so.1
$ ln -s libXvMCNVIDIA.so.1 libXvMCNVIDIA.so
$ cd modules

$ ln -s libnvidia-wfb.so.* libnvidia-wfb.so.1
$ ln -s libnvidia-wfb.so.1 libnvidia-wfb.so
$ cd extensions
$ ln -s libglx.so.* libglx.so.1
$ ln -s libglx.so.1 libglx.so

Then, the kernel module needs to be compiled and placed in an appropriate location:

$ popd
$ pushd .
$ cd usr/src/nv
$ make SYSSRC=/usr/src/linux-headers-2.6.27-11-generic module
$ cp nvidia.ko /tmp
$ rm *.o *.ko
$ popd
$ mkdir -p lib/modules/2.6.27-11-generic/updates/dkms
$ cp /tmp/nvidia.ko lib/modules/2.6.27-11-generic/updates/dkms
$ cd ..

Create a new loopfile of a reasonable size (80 MB should work) with:

$ dd if=/dev/zero of=restrictedDrivers.nvidia.img bs=1M count=80
$ mkfs.ext3 restrictedDrivers.nvidia.img -F

and populate it by mounting the image file and copying all the files above:

$ mkdir Image
$ mount -o loop restrictedDrivers.nvidia.img Image
$ cp -RP NVIDIA-Linux-x86-180.29-pkg1/* Image
$ umount Image


There is, however, a final step: having the new module loaded automatically. For this we need an updated modules.dep inside restrictedDrivers.nvidia.img. My way of doing it is to boot XBMCbuntu with the NVIDIA drivers in safe mode, run "sudo depmod -a", copy the resulting /lib/modules/2.6.27-11-generic/modules.dep (and /lib/modules/2.6.27-11-generic/modules.dep.bin, if present) back to the same location in the build environment, and then repeat the population of the .img file.
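
A minimal sketch of that cycle, assuming the files are carried between the two machines on a stick mounted at /mnt (the transfer path is an assumption). On the XBMCbuntu box, booted in safe mode:

$ sudo depmod -a
$ sudo cp /lib/modules/2.6.27-11-generic/modules.dep* /mnt

Back in the build environment, refresh the extracted tree and repopulate the image:

$ cp /mnt/modules.dep* NVIDIA-Linux-x86-180.29-pkg1/lib/modules/2.6.27-11-generic/
$ mount -o loop restrictedDrivers.nvidia.img Image
$ cp -RP NVIDIA-Linux-x86-180.29-pkg1/* Image
$ umount Image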

Installing driver from repositories

This is certainly less involved and stays up to date with all Ubuntu variants that are still supported; if you have an EOL Ubuntu or variant, the repositories may not have the driver you are looking for. It is more reliable, faster to install, and works well.

see also: XBMCbuntu#Upgrading NVidia drivers in Ubuntu and variants
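
As a rough illustration only (the PPA and package names vary by Ubuntu release and are assumptions here), a repository install of the NVIDIA driver looks something like:

$ sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
$ sudo apt-get update
$ sudo apt-get install nvidia-current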

Building ATI/AMD video drivers

  • Method 1: build drivers on XBMCbuntu

NB: You must have a large persistent storage file for this, around 1 GB, or the process will not complete due to insufficient disk space.

Install the required packages:

$ sudo apt-get update
$ sudo apt-get install build-essential cdbs fakeroot dh-make debhelper debconf libstdc++5 dkms linux-headers-$(uname -r)

then cd to where the installer package is saved and build the driver packages:

$ sh ati-driver-installer-9-1-x86.x86_64.run --buildpkg Ubuntu/hardy 

Finally install the packages:

$ sudo dpkg -i xorg-driver-fglrx_8.573-0ubuntu1_i386.deb fglrx-kernel-source_8.573-0ubuntu1_i386.deb fglrx-amdcccle_8.573-0ubuntu1_i386.deb 

and create a default initial configuration with:

$ sudo aticonfig --initial -f


  • Method 2: build drivers in MIC

Assuming you have created a target in MIC for the creation of the XBMCbuntu image, you need to create a new target having all the necessary development tools ON THE SAME PLATFORM, so that the build target has the same system components as the XBMCbuntu target.

Once done, copy the driver installer script onto the target and perform the following steps in a chrooted terminal on the build target:

$ mkdir Files
$ sh ./ati-driver-installer-9-1-x86.x86_64.run --buildpkg Ubuntu/intrepid                                                                                        
$ dpkg-deb -x fglrx-amdcccle_8.573-0ubuntu1_i386.deb Files                                                                                                            
$ dpkg-deb -x fglrx-kernel-source_8.573-0ubuntu1_i386.deb Files                                                                                                       
$ dpkg-deb -x fglrx-modaliases_8.573-0ubuntu1_i386.deb Files                                                                                                          
$ dpkg-deb -x libamdxvba1_8.573-0ubuntu1_i386.deb Files                                                                                                               
$ dpkg-deb -x xorg-driver-fglrx_8.573-0ubuntu1_i386.deb Files                                                                                                         
$ dpkg-deb -x xorg-driver-fglrx-dev_8.573-0ubuntu1_i386.deb Files 

You have now all the files needed in the "Files" directory, minus the kernel module. In order to create the kernel module we are going to build it manually:

$ cd ./Files                                                                                                                                                            
$ pushd .
$ cd usr/src/fglrx-8.573/
$ ./make.sh --uname_r 2.6.27-11-generic
$ cp 2.6.x/fglrx.ko /tmp
$ cd 2.6.x
$ make clean
$ popd
$ mkdir -p lib/modules/2.6.27-11-generic/updates/dkms
$ cp /tmp/fglrx.ko lib/modules/2.6.27-11-generic/updates/dkms
$ cd ..

You can now create a new loopfile of a reasonable size (80 MB should work) with:

$ dd if=/dev/zero of=restrictedDrivers.amd.img bs=1M count=80
$ mkfs.ext3 restrictedDrivers.amd.img -F

and populate it by mounting the image file and copying all the files above:

$ mkdir Image
$ mount -o loop restrictedDrivers.amd.img Image
$ cp -RP Files/* Image
$ umount Image

There is, however, a final step: having the new module loaded automatically. For this we need an updated modules.dep inside restrictedDrivers.amd.img. My way of doing it is to boot XBMCbuntu with the AMD drivers in safe mode, run "sudo depmod -a", copy the resulting /lib/modules/2.6.27-11-generic/modules.dep (and /lib/modules/2.6.27-11-generic/modules.dep.bin, if present) back to the same location in the build environment, and then repeat the population of the .img file.
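
The same refresh cycle as in the NVIDIA section applies; a minimal sketch, again assuming the updated modules.dep arrives on a stick mounted at /mnt:

$ cp /mnt/modules.dep* Files/lib/modules/2.6.27-11-generic/
$ mount -o loop restrictedDrivers.amd.img Image
$ cp -RP Files/* Image
$ umount Image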

References

Ubuntu Hardy Installation Guide on the Unofficial ATI Linux Drivers Wiki: http://wiki.cchtml.com/index.php/Ubuntu_Hardy_Installation_Guide