RPCEmu On RPi 3B in Raspbian.
|
I gave this a try for a while last night and this morning. It works, and it is usable, though a bit on the slow side (averaging about 12 MIPS). My goal was to be able to switch back and forth between ROL RISC OS 4.02 and ROOL RISC OS 5.22+, and get rid of some of the HW incompatibility of the ARMv8. For this RPCEmu does work, though it is too sluggish for extended use. Maybe once someone gets a dynamic recompiler (JIT emulator) CPU core with ARM as the back end for RPCEmu, it will become usable enough for day to day use. Out of curiosity I dug out an x86 box I have in a closet and ran RPCEmu JIT on that; it ran at about 16 MIPS average, so almost as slow as on the RPi 3B, and the RPi 3B is only about twice as powerful as the x86 box I was using. |
|
It might be simpler to have somebody clever switch the ARM so that unaligned loads abort, then fake the old behaviour. While they’re at it they can fake SWP. It won’t be atomic, but it’s better than crashing… |
|
Mostly sounds good to me. Though I must admit that I was using SWP for my multicore experiments on the RPi2 (yes, I know it was already listed as deprecated). Whatever happened to the ARM that was almost always the fastest CPU available in any architecture? The ARM made by people who were not trying to add a bunch of unneeded cow pies nor remove the good stuff. |
|
That ceased to be the case in the mid 90s when Windows 3.1 took the world by storm and the world realised that it needed better than a 386SX to get stuff done. I don’t know how old you are, but there was a time when x86 silicon would practically double clock speeds every month. It was insane for the magazines, as the parts they’d talk about would be obsolete by publication date, and there were some REALLY crappy motherboards because the processors were coming out faster than the motherboards to put them on. For some obscure reason, it seemed like each new incarnation used a different socket, so you could rarely just update the processor.

The common word seems to be that The Internet killed Phoebe, but personally I think that’s just some convenient fertiliser. Truth is, Phoebe, no matter how good it was, couldn’t hold a candle to the capabilities of a cheap box-shifter PC. This means you’d basically be selling to the enthusiasts. That’s not to say it couldn’t be done (after all, who bought the Iyonix?) but maybe Acorn’s financial situation at that time required more?

As for the “fastest CPU”, this depends on what you want to achieve. The main “device” processors are ARM, x86, and MIPS. There’s some other stuff (PowerPC, souped-up Z80 clones…) kicking around, but mostly you’ll come across those three. Each with their positives and negatives. ARM, for instance, has never made much impact on the desktop. The x86 rules the lowlands and the highlands there. Why? Because desktop machines offer a meaty power supply to allow you to toss loads of juice into an x86. There is quite a lot of grunt available in a modern x86 processor. It’s almost a shame it is hobbled by… well, by virtue of being an x86.

Conversely, x86 has not made much of a dent in the mobile world. Why? Because ARM devices are ridiculously power efficient. There’s the infamous story of the original ARM box that kept failing a power cycle reset, which was eventually traced back to the inertia of the fan on the box backfeeding enough power to keep the processor running. There’s also the story that Acorn computers could not be awarded “Eco” labels because they had no special energy saving mode. They were already more efficient running flat out. The result? What you hold in your hands every time you fondle your phone to see if anybody has posted pointless status updates…
I still mourn the loss of 26 bit. But it needed to be sacrificed (boom! headshot) in order that we be capable of addressing more than 64MiB, which was a phenomenally huge amount in the mid ’80s, but comically restrictive a decade later. Today’s cow pies? 64 bit.

The ARM originally kicked ass over the x86 because the x86 was primarily a 16 bit processor. I say “primarily” because anybody who has ever dealt with the DOS “memory models” and the associated therapy will know that it was obfuscated to the point of almost being an April Fools that somebody managed to get taken seriously. The ARM dealt with 32 bit values, which gave it an advantage: it could process twice as much raw data, not by virtue of clock speed but by virtue of data width. To see this in more detail, look up some 6502 code for multiplying two 32 bit values, and realise that the ARM can do the grunt work of multiplication in one instruction.

Thing is, luckily for us, while only a few processors supported ARM26 (once Acorn went, so did 26 bit compatibility), ARM32 is widespread and well known, so I think ARM64 will not replace it so much as work alongside it. We have processors, such as the ARMv8 family, that mostly run ARM32 code and can switch between 32 and 64 bit, plus having a 64 bit OS host 32 bit applications. There may be a day when ARM decides to drop that additional complexity and make a pure 64 bit only design. However 32 bit will not cease, it’ll just not be available on that processor…
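To make the multiplication point concrete, here is a rough sketch (register numbers picked purely for illustration):

    MUL R0, R1, R2    ; R0 = R1 × R2 (low 32 bits of the product), one instruction from the ARM2 onwards

The 6502, with no multiply instruction and 8 bit registers, has to grind through the same job with a shift-and-add loop, dozens of instructions and far more cycles. |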
|
Nice rant. You should go to work advertising for Intel, you seem to be good at it. I grew up with the Personal Computer, my first three being 8-bit machines based on the i8080, Z80, and 6502 respectively, and they were high end systems of the time. So the 80486, Pentium, Pentium II, MC68030, MC68040, MC68060, etc. were 16 bit processors according to you? The ARM held top place in performance at the time of each release up until the mid 1990s. That is hardly what you are portraying. The ARM was running at over 150 MIPS real world while the Intel IA32 (x86 32-bit as it was at the time) could only just push 100 MIPS, and the Motorola CPUs were falling behind. AIM did not come until later, after the ARM was no longer top dog (AIM = Apple, IBM and Motorola, the alliance that created the PowerPC coffee warmer).

Personally I think that it was the high performance of the early ARM that prompted the other companies to try their hand at RISC architectures (AIM PowerPC, DEC Alpha, and a lot of new playing with MIPS) in the early to mid 1990s. Yes, I was around to see all of this growth, and to see chips explode due to overheating before heatsinks became standard. The DEC Alpha was the first single-IC, end user targeted CPU to require a heatsink due to the heat it produced. ARM did hold the top dog position for a long time.

Some instruction set extensions make good sense; others are just there so they can say that they added this, or that this older way is no longer needed, even though the new way has no real advantage over the old way, and is sometimes a disadvantage. I understand and support the advancement of the ARM: VFP is great, though I have not yet played with NEON. The extended MMU that allows mapping from way more than 4GB on the 32-bit ARMv6 and ARMv7 CPUs that support it is great. The non-ARM advances in the technology of RISC OS computers are also great, like the VC4 (if they would let out enough info to completely program for it).

Though ARM was designed as a desktop CPU. The low power consumption accidentally propelled it into the embedded market, even while it was still more powerful than the competing desktop CPUs. And it was very successful in the desktop market; have you ever heard of a company called Acorn? OK, so the time of the Acorn-made RISC OS machines had come to an end. That does not mean that the push on computing power for the ARM had to slow. ARM always had the advantage that Acorn took their time with the designs before releasing a new ARM, and it tended to be far enough ahead of the competition that the next one came in time to keep them going. That is, until around 1993/4, I would say. The time that Intel CPUs were coming out way too fast came later in the game. Way, way later in fact; that trend really took off around 1995 (yes, I remember it). I hope you are correct about the 32 bit ARM being here to stay. That would be very nice. |
|
Scale. Acorn were (relatively) a big company, and operating at that scale isn’t cheap: you’ve got to sell more products to make enough money to cover your overheads. Everyone since has been one or two man bands, and even then Castle found that the development costs for Iyonix 2, or respinning Iyonix 1 to comply with changes to standards requirements, weren’t affordable. How many of the current RISC OS hardware companies are almost hobbies alongside a “proper” job which pays the bills? How many modern RISC OS systems contain dev boards designed elsewhere? |
|
With time, the brain tends to rewrite history :)
Only with the StrongARM… Mid 1996. The Pentium 200 (376 MIPS) had been available for a year by then. Before that it was the ARM7, superseded by the Pentium 75, 90 and 120 (I did have a Pentium 75 at the RPC 700 launch; faster models were already available).
The first ARM was available in its first computer in 1987: 6 years after the first MIPS, 1 year after HP-PA, and the same year as SPARC. PowerPC does not count… it’s based on a design from 1974. And it’s not really RISC anyway.
All mainframe and workstation processors needed one before that, too. And of course the 486 DX4 and Pentium. Even some ARM 3s at ‘high’ speed, but shhhhhhh :)
In the desktop, yes… if we can consider that the price of the Archimedes was ‘desktop like’. PCs were pricey until the Pentium, then the price of my computers tended to melt… Not the price of the RISC PC. |
|
Well, that is news to me. Where I was, the Pentium did not come on the local market till 1997. So I had that one wrong, not that I would want to use an x86 for personal use (if you have ever written code to set up the GDT, LDT, IVT, switch to PMODE, and set up a V86 environment so you can call the ROM BIOS, then you would understand).
Yes, MIPS is older than ARM, hence the wording about more “playing with MIPS”. HP-PA does not count as it was never successful in the Personal Computer market. I would argue about the classification of the SPARC as RISC; there were pseudo-microcoded ops on the SPARC if I remember correctly, which I would consider non-RISC (yes, it has been a very long time since I played with a SPARC).
Mainframe and mini processors are not exactly end user targeted, are they? I never had a heatsink on any ARM 3, nor did I on my AMD 486DX4 133MHz. And the DEC Alpha came about just before the Pentium, if I am not mistaken. The point here is that we still do not need heatsinks on the ARM based desktop computers being made today.
Well, it was comparable in price to PCs (when there were real IBM PC, PC/AT, PC/XT and PS/2 compatibles), as well as to Macintoshes, NeXT systems, the BeBox, Amigas, the Atari ST/TT/Falcon, etc. So yes, I would say it was at a Personal Computer price point.
And talking about the mind rewriting history: |
|
Yep, I bought my Pentium 75 a few months before my RPC 600.
The RISC PC was not exactly a low cost computer. Almost as pricey as a MIPS workstation in France.
Yep, 1992. Pentium 1993.
Not in France. The Archimedes was the price of a very high end Amiga or Atari computer. The RISC PC was really pricey compared to a PC (but OK for a Mac user). In 1995, my PC was 10.000 F, my RPC 600 20.000 F (half the price of a low cost car). The Amiga 1200 was around 2400 F, as was the Atari Falcon (not 100% sure). The first PCs were available for less than 5000 F.
That’s why I talked about the PC. IBM did propose the PC XT and AT (286). I remember that clones already existed, with ratings for compatibility. The PC is a non-proprietary system; that’s why enterprises loved it. Anyway, we are not here to discuss whether ARM is cool or not. We all know it, since we’re here :) |
|
? Maybe you missed the subtlety of this: “I say ‘primarily’ because anybody who has ever dealt with the DOS ‘memory models’ and the associated therapy will know that it was obfuscated to the point of almost being an April Fools that somebody managed to get taken seriously.”
My first, a Beeb, was 6502. Then there was, I think, a Dragon 32 that was sort-of 6502ish. But it was naff so I gave it to a friend. Sadly, the days before YouTube and omnipresent video cameras. Oh well.
No idea about the MC processors, nor can I be bothered to look ’em up. If you mean the 68000 series (as in the Amiga and Mac Classic), then it started life as a mostly 16 bit processor with 32 bit registers. It got better, though. :-) The x86 family? Actually, yes. While those you listed can all run in 32 bit mode, they boot up in a brain-damaged 16 bit configuration. My mother’s 486DX2 spent most of its life running WordPerfect 5.1 under DOS, or Word (2?) under Windows 3.11. Pretty much all 16 bit operation.
Really? The ARM7 (1994) managed to clock up 40 MIPS at 45MHz (not in a RiscPC, mind you…). That same year, the 486DX4 would manage 70 MIPS at 100MHz while the Pentium would deliver 188 MIPS at 100MHz, and the 68060 of which you speak would offer 110 MIPS at 75MHz.

ARM’s claim to fame was twofold. The first, its legendary efficiency. That’s why it is everywhere. If it needs to process, needs to be small, and needs to run off batteries, ARM is your processor of choice.

Problem is, Windows 3.11 came along. This brought a lot of sanity to the PC market. No longer did every single piece of software need to have its own graphics and printer drivers (and, trust me, god help you if you had a Hercules display and a Kyocera printer). Now stuff would be drawn to a virtual device, the infamous GDI. Windows would translate this to the video adaptor and/or the printer. And since manufacturers now only needed to write drivers for Windows, it was done.

RISC OS and ARM still held their own, even in the face of faster PCs. Because, you see, MIPS is not really much of an indication of anything until you get to big disparities. Why? Let’s say you use all the registers on a 6502 in your interrupt handler. To get out and go back to what was running at the time, you’d do what?
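Probably something like this (a sketch, assuming the handler pushed A, X and Y on entry):

    PLA          ; pull the saved Y from the stack…
    TAY          ; …and put it back in Y
    PLA          ; pull the saved X…
    TAX          ; …and put it back in X
    PLA          ; pull the saved A
    RTI          ; return from interrupt, restoring the status register and PC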
On the ARM2, it’d probably be a version of my favourite instruction:
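Something like this, give or take the exact register list that was stacked on the way in:

    LDMFD R13!, {R0-R12, PC}^    ; pull the workspace registers and the PC back in one go, the ^ restoring the PSR flags too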
One processor takes six instructions, the other takes one. So assuming equal MIPS, one will still be much faster than the other. This is why the Archimedes held its head high compared to Windows 3.x in the 386 era. It struggled in the 486 era, not so much because the processors were better as because they were just faster. Then along came Windows 95, which sort of mostly ran in a 32 bit mode, ditched the old co-operative multitasking, and… well… it was game over.

The original Pentium entered the market in 1993 clocking at 60MHz (~100MIPS). Nearly twice as fast in clock speed and over three times faster in MIPS than the then-current ARM6 (~30MHz, 28MIPS). Don’t get me wrong – the American way of dealing with obstacles is “more dakka”. While there were quite a number of architectural improvements to the Pentium over the 486, the one that was a crowd pleaser was to just make the thing run faster. And faster. And faster…
The heritage of the PowerPC can be traced back to 1974 and what might possibly be the first RISC processor made – the IBM 801. The Wiki entry for Acorn Archimedes is interesting: “ARM’s RISC design – 32-bit CPU (26-bit addressing), running at 8 MHz, was stated as running at 4.5+ MIPS, which provided a significant upgrade from 8-bit home computers, such as Acorn’s previous ones. Claims of being the fastest micro in the world and running at 18 MIPS were also made during tests.” So there’s the “how fast we can push it” versus “how fast you actually get”. Yup, it probably could run at 18 MIPS. The end-user would see a quarter of that.
Ah. Early heatsinks. Shall we talk about the Beeb’s Video ULA? Okay, it isn’t a processor, but if stuff makes heat then getting rid of the heat would be a good idea.
Absolutely. The x86 instruction set is horrible.
I think a cute example here is the “MMX” instructions on the Pentium. If you delve deep into your video player on Windows (maybe Linux too) you will see you can tweak the optimisations. There are several flavours of MMX. Or SSE. Or 3dNow! Or…?
…somewhat ironically it would never have supported proper virtual memory until the MMU variants. I don’t remember the specifics, just that something in the MEMC was back to front to what was normal for VM systems. Not to mention a software “dirty bit”. Not to mention the issues with R14_SVC.
Very successful in the desktop market in the later eighties in the United Kingdom (also parts of the Commonwealth). Did they ever really figure at all in the US? I think that was pretty much ruled by Apple with AppleII → Mac Classic being roughly equivalent to our Beeb → Archimedes. I have seen “a few” Archimedes in schools in the early ‘90s. I’ve seen a shedload of crappy RM Nimbus machines.
Acorn only designed three processors: the ARM1, the ARM2, and the ARM3. The ARM250 is a sort of hotchpotch attempt at an SoC to push costs down.
There are loads of 32 bit systems that are happily working in a 32 bit world where 64 bit is… not really necessary. Think of set top boxes – the video decoding will be custom hardware, with the ARM overseeing things. It won’t need 64 bit. Hell, it could probably run just as well with 16 bit Thumb!
Wow! Where’s that then?
Luckily by the time I needed to call the ROM BIOS, the Borland runtime took care of the nasty bits. I still have nightmares about those memory models though, and mucking around with __far and __huge prefixes to variables. It seemed all so horrible compared to Norcroft C on RISC OS.
I heatsinked my Pentium75 when I fiddled with some links to push it to 90MHz.
A heatsink is recommended on an overclocked Pi; though I’m not entirely sure why – isn’t that thing on top just the RAM? Might a small fan not be better at shifting heat?
??? The AT specification followed the XT, so it is basically a 286-era design. Given that things were falling out of IBM’s control, they tried to make a sort-of-compatible but otherwise proprietary system called the PS/2. It wasn’t bad in terms of specifications; however, the world did not want Big Blue going proprietary, so the PC market just ripped off all the good stuff (VGA, SIMMs, 3.5" floppies, built-in decent UART, PS/2 keyboard and mouse…) and made this stuff become standard in “clone” PCs. That was probably when IBM realised that it had lost. People wanted Windows, and, more than that, people wanted computers capable of running Windows. This is how we ended up with dozens of manufacturers trying to outdo each other producing stuff that was largely the same. The margins are poor because it is not proprietary. But any company can play the game and make a beige box… because it’s not proprietary.
ARM is cool. It even runs cool. ;-) It’s nice to program. Hmmm… Time to go warm up some gyōza. Miam! |
|
A work colleague used to work in education IT; his opinion is that the words/phrases “crappy” and “Research Machines” are essentially synonyms. I’ve only ever seen two or three, but those were enough to leave a lasting impression and I agree with him. Bad build quality, “supported” by what can most charitably be described as “clueless numpties” (I don’t think he was impressed). |
|
I spent a year as Director of IT at Highgate School (spit) in ‘91-92 and equipped it with all lovely A3000s. Couldn’t stand the place and quit at the end of the school year. My successor chucked them all out and replaced them with Nimbuses. Hey ho. Then five-and-a-half years running the pre-press operation for The Journal of Physiology and The Journal of Experimental Physiology. Used A5000s and A540s, then Risc PCs, running Impression Publisher and printing camera-ready copy at 161% of final size on Calligraph laser printers. Saved the society millions (literally). When I left, my successor junked the Acorns and installed PCs (I bought all the old equipment for a song… :-) …all gone now though). The PC operation lasted three years before they jacked it in and got the publishers to do the pre-press. Hey ho. |