GCCSDK crosscompiling examples?
Tristan M. (2946) 1036 posts |
Thanks, Chris!
Hmm. I wonder…
That’s odd. Really odd. |
David Pitt (102) 743 posts |
It certainly does, many thanks indeed. !PDF is now good on my Raspberry Pi 3. |
Ronald May (387) 407 posts |
Syntax error: word unexpected (expecting “)”) You can get this error when the Linux machine tries to run a binary that has been built for RISC OS. I don’t know why your cross-compiler would be doing this while retrieving/decompressing an archive; are you using standard autobuilder methods? |
Tristan M. (2946) 1036 posts |
Yep. Not doing anything too strange. I just spent a minute shoving it over to RISC OS to try bzip2. Would you believe it’s a RISC OS ELF executable? Everything else in /env is an x86-64 Linux executable. I’ve also tried renaming bzip2, doing a build clean and build-world. bzip2 doesn’t come back. I also tried hitting svn for gcc and autobuilder in case a file was messed up. Nope. I’m thinking I’ll just purge the whole tree and restart from scratch unless someone has a suggestion. |
Ronald May (387) 407 posts |
Tristan, What distribution have you installed the SDK on? Something is really wrong with your various paths. |
Tristan M. (2946) 1036 posts |
I swear this message is cursed. I think it’s at least the sixth attempt at replying. I get interrupted every time somehow. First time round, and this time, I followed the installation instructions to the letter. To confirm I wasn’t going mad I built the same “Hello World” source file from within and outside the GCCSDK tree. I got a RISC OS ELF and a Linux amd64 ELF, respectively. My honest suspicion is there’s an autobuilder script in /autobuilder somewhere that’s a bit wobbly. The PC I have been using is running Lubuntu 16.04 amd64. !PDF built fine for me after having to build some prerequisites. It also works perfectly, which is great. I don’t like having to turn on this PC, which burns hundreds of watts, to read reference material. I’ve been trying to find a nice way of feeding some useful projects to the autobuilder. Some aren’t a concern because they are up on GitHub or whatever, but others aren’t. They are sitting on other sites. Mostly they are essentially abandoned projects, so updates don’t happen. Downloading every time seems kind of pointless. |
Ronald May (387) 407 posts |
The PC I have been using is running Lubuntu 16.04 amd64. I think most users are using the Ubuntu/Debian variants, so you should be fine there. |
Tristan M. (2946) 1036 posts |
Ronald, again sorry for the late reply. No time for this sort of thing recently, unfortunately. I had a chance to play a bit more last night, but unfortunately it was really late so it’s a bit hazy. Autobuilding one of the packages caused it to try to run a ,e1f file. While it’s true the downloaded files are still there, it still re-downloads them every single time. Using the -D option just adds an incrementing suffix to the new download. I tend to have a lot of terminals open with various working directories when I’m doing this. Saves a lot of messing around, especially because each one can have different environment variables set. |
Ronald May (387) 407 posts |
Autobuilding one of the packages caused it to try to run a ,e1f I take it you are making your own autobuilder package rather than using the existing bash one. |
Tristan M. (2946) 1036 posts |
No, I’ve been using the bash autobuild. I’d say roughly 1/3 of the things in the autobuild are broken currently. Some only take minor changes and others not so much. I have been working on porting a couple of other things outside the autobuild too though. Going through other things in the autobuilder tree is a good way to work some things out I agree. |
Theo Markettos (89) 919 posts |
FWIW you can see the current state of build-ability here: Lots of packages are broken because they’re essentially a moving target: because we build somebody else’s sources (e.g. from Debian), if upstream changes their code our patches don’t magically change to suit. It’s quite a bit of work to keep on top of all the failures, and it’s only recently that we’ve had infrastructure to build regularly. Patches are, of course, welcome, and feel free to ask on the list if there’s something you’re having trouble with. bash for RISC OS is unfortunately a pretty difficult customer: not just building it, but running it. It expects a process model quite different from RISC OS’s and, while UnixLib does its best, it’s going to struggle at the best of times. Things like running a backgrounded process are something RISC OS can’t do, unless you re-implement the Unix process model using TaskWindows. |
Tristan M. (2946) 1036 posts |
That’s a really useful link. It’s a given that packages will keep breaking because of updates. How does one go about submitting patches? |
Theo Markettos (89) 919 posts |
Those are built with the current head of GCCSDK; however, some builds might be failing for other reasons. Also, the most recent full build was in March: I have some faster hardware to run them more frequently, but it’s not deployed yet. To submit patches, please just mail them to the GCCSDK mailing list. |
Ronald May (387) 407 posts |
Going into a job, the build number and console output might be useful in finding out what it actually did and how your build might differ in some way. From the console option list, there don’t appear to be any last-failure/last-success files. Another thing: is it possible that a build failed because the source was temporarily unavailable? I’m just trying to find the cause of 4 out of 6 failures. |
Theo Markettos (89) 919 posts |
Jenkins’ organisation of the web interface takes a little getting used to. There are several pathways:
<jobname> (displays the most recent successful artifacts)
<jobname> → build number (displays the artifacts for that build)
<jobname> → build number → console log
If you’re looking at the log for a particular build number, you need to go up one level to see the artifacts (the files it produced). It is possible, indeed likely, that jobs failed because the source wasn’t available. Most packages come from Debian (who have reliable source code repositories) but others come from a random tarball or the author’s repository, which do move or break occasionally. In terms of pthreads, UnixLib sets up a callback on a timer. The callback switches thread context and then sets up another callback for the next tick. This works surprisingly well given it’s in third-party code, but it isn’t OS-wide and isn’t available if you aren’t linking with UnixLib. Other OSes have both a process model (where processes are spawned, executed and given means to communicate) and a threading model (where processes can create and destroy threads); typically the unit of scheduling is a thread, not a process. In the context of the wifi discussion we’re talking about kernel threads, where UnixLib won’t help, but userland threads are most useful when something else is also scheduling the process as well as the threads. TaskWindow is kind of the de facto process scheduler, but it’s very limited compared with process handling on other systems. |
Tristan M. (2946) 1036 posts |
I may be wrong, but I think I read about the UnixLib implementation of pthreads here. It’d be “fun” to get stuck into the whole kernel pre-emptive multitasking thing, but I’m not going to pay for the compiler for the privilege. Sorry. If the Pi-specific version were offered without the rest of the contents of the NutPi package, and as a download, I’d consider it. At least UnixLib is still built with GCC. But I don’t think much can really be done to improve on the model it uses for pthreads without breaking other things. There are a few threads for this anyway. Last night I got Code::Blocks set up so I can work on the program I’m trying to port. It’s GrafX2, by the way. I just didn’t say before because I didn’t want to get anyone’s hopes up. I’ve gotten further, but I’m stuck on needing to do something about file loading/saving dialogs. No matter. At least I’ve got a nice IDE to work with now, which will speed things up a bit. I set it up for GCCSDK and to use ro-make for the Makefile. Hardly rocket science, but it’s better than doing all the searching and editing manually. Edit: I should have been clearer. It creates native dialogs within the SDL app. It’s just that the handling code for the filesystem needs some work. I know I saw a package in the autobuilder that went and patched all the instances of the *nix fs to work with RISC OS, but I can’t find it. |
Tristan M. (2946) 1036 posts |
This thread is still my main reference because it has information not on the website. This is a brief note, mostly for myself and any others who may care. Don’t use -j4 when building the RISC OS native toolchain on something with 1GB of RAM. Each parallel job takes 250MB, give or take. I’m building it on an aarch64 device. Put simply, it didn’t work out very well. |
Tristan M. (2946) 1036 posts |
It’s my thread. I can be a thread necromancer if I want to. I was just trying to do something a little more exotic with GCCSDK and bumped into something. I realised that ro-path only sets some of the environment. How do I set it up temporarily so that the tools in gccsdk/cross/bin are the defaults? It looks like I may need to go nuts with symlinks in /env, but then what? A bit of an aside, but so far I’ve successfully built GCCSDK in Linux (Ubuntu or Debian) on x86, x86-64, aarch32, and aarch64. The last two take some fiddling. |
Theo Markettos (89) 919 posts |
You can ‘source’ it in your shell, to get your shell to remember the settings for as long as it’s open. Or you can source it in your .bash_profile to make it permanent. However, I’d suggest not doing that, because quite often builds involve building tools for the build machine (Linux x86) which are then run and generate output which is then cross-compiled for the target machine (RISC OS ARM). If you start overloading CC and such, then you’ll discover the first step builds RISC OS binaries that the second step can’t run on the x86 machine. You’ll likely end up breaking Linux builds if you have the RISC OS paths sourced at times when they aren’t needed. By the way, a few months ago I wrote some more notes on using the cross-compiler: |
Tristan M. (2946) 1036 posts |
I need some help. Something’s not right. I tried out GCCSDK on both aarch64 and x86-64 (the machines I have it built on), but something’s amiss. I’m just using “source ro-path” to set the environment. It doesn’t seem to be finding headers that it should be. While trying to cross-compile something, it fell over trying to build something that uses functions from wchar.h, which appears to be present. So I grabbed some really simple example source from the web and tried building it both natively and via GCCSDK. Native worked; GCCSDK failed. I guess the question is: what am I doing wrong? |