Mode change procedure
Jeffrey Lee (213) 6048 posts |
I’ve produced a series of flow charts showing the different mode selection processes that occur when changing screen mode: http://www.phlamethrower.co.uk/misc2/ModeChangeSub.png (Note: big image!) Start at ModeChangeSub and continue from there. As you can see, the current process is pretty convoluted, and not entirely bug-free. I’m posting this here for two reasons:

1. To get feedback on the diagram itself. I’m thinking it’s something that might be worth adding to the wiki, e.g. in the technical notes section. Obviously there are a few places where text is clipping into node edges, etc., but are there any other improvements to the diagram that people feel would be necessary? So far my primary concern has been to try and make it so that you can just about fit all of it on screen at once if you have a 1080p or above monitor. Actually, I can see a few bits where some nodes need expanding (e.g. “driver happy” checks need to show the GraphicsV calls involved).

2. To get feedback on the mode change procedure. There are a few situations which the current code doesn’t deal with very well (or at all). So it would be good if we could work out how to improve the current logic, or invent a whole new set of logic to replace it.
Other problems highlighted in the diagram:
|
Jon Abbott (1421) 2652 posts |
The flow charts look good to me; I see what you mean about the indefinite loops. Some of the choices seem odd as well: why does FindOKMode fall back to MODE 0, when next to nothing supports it these days? I can see another GraphicsV extension here, to provide a minimum MODE definition for fallback. |
Steve Pampling (1551) 8182 posts |
Perfect demonstration of why things need updating – which I think was part of Jeffrey’s intention. |
Jon Abbott (1421) 2652 posts |
I’d shift a lot of the decision making off to GraphicsV and simplify the OS layer; the trouble with the current model is that the OS doesn’t know about GPU or monitor limitations. GraphicsV already vets the MODE and knows the GPU limitations, so why not let it modify the VIDC3 list as designed? The alternative is to extend GraphicsV to pass GPU limitations to the OS and make your flow diagram even more complicated. My suggestion:

ModeChangeSub
1. Issue Service_PreModeChange (exiting if claimed)

Service_ModeTranslation
“call failed → Treat as 1bpp” should exit with “unknown MODE” if the OS_ReadScreenModeVariable fails. And what’s the relevance of MonitorType 7? MODE translation should work irrespective of the MonitorType.

FindSubstitute / FindOKMode
Ditch these and let GraphicsV 7 deal with them.

The rest look okay as is. I’m not sure what PushModeInfo does though. |
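A minimal sketch of the simplified flow being suggested above, in C-style pseudocode. None of these function names exist in the kernel or ScreenModes; GraphicsV 7 (“vet mode”) is real, but routing the substitution decisions through it is the proposal under discussion, not current behaviour.

    /* Hypothetical outline of the simplified mode change flow suggested above. */
    typedef struct vidclist vidclist;            /* type 3 VIDC list (opaque here) */

    extern int  service_pre_mode_change(int mode);                 /* non-zero if claimed */
    extern int  translate_mode_number(int mode, vidclist **out);   /* 0 on failure */
    extern int  graphicsv_vet_mode(vidclist *vl);                  /* driver may adjust the list */
    extern void graphicsv_set_mode(const vidclist *vl);

    int mode_change_sub(int mode)
    {
        vidclist *vl;

        if (service_pre_mode_change(mode))
            return 0;                   /* a module claimed the mode change */

        if (!translate_mode_number(mode, &vl))
            return -1;                  /* exit with "unknown MODE" rather than treating it as 1bpp */

        if (graphicsv_vet_mode(vl) != 0)
            return -1;                  /* driver rejects it, even after adjusting the list */

        graphicsv_set_mode(vl);
        return 0;
    }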
Jeffrey Lee (213) 6048 posts |
The OS does know about monitor limits (or at least it should do shortly) – ScreenModes can (will) extract them from the EDID. It would be nice to keep the knowledge about monitor limits within ScreenModes rather than shifting it into GraphicsV (as you seem to be suggesting?). Otherwise the drivers will become a lot more complicated and there’ll probably be lots of discrepancies in how they enforce the monitor limits, adjust mode timings to fit the limits, etc. But at the same time I can see that it would be easier to have all the knowledge of display limits and driver limits in the same place, so that the code can make informed decisions about what parameters to change instead of there being a constant back-and-forth between ScreenModes & GraphicsV until a happy medium is reached.
Yes, some kind of feedback mechanism is definitely required.
The diagram doesn’t make it very clear, but if you look at the source then you’ll see that ScreenModes only implements that service call (Service_ModeTranslation) in order to try and save the user from the poor choices the kernel will end up making. The monitor type 7 check is there because if you had (e.g.) monitor type 0 and ScreenModes were to suggest that mode 44 be used (NTSC 640×200) then things would go wrong, because mode 44 is only a builtin mode for monitor types 1, 3, 4 and 8.
Its job (i.e. PushModeInfo’s) is to generate the VIDCList in preparation for sending to the video driver, and the new set of mode variables in preparation for writing to the VDU driver workspace. |
Jon Abbott (1421) 2652 posts |
How about extending GraphicsV so it passes back min/max screen resolution, supported bpp, width/height modulus, pixel rate limitations etc., and letting ScreenModes make all the decisions? |
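To make the suggestion concrete, this is roughly the sort of limits block a driver could hand back in one go. The structure and field names are invented for illustration only; no such GraphicsV call or ABI exists today.

    /* Invented for illustration: limits a driver could report to ScreenModes
     * up front, instead of vetting candidate modes one at a time. */
    typedef struct {
        unsigned min_width, min_height;     /* smallest framebuffer, in pixels */
        unsigned max_width, max_height;     /* largest framebuffer, in pixels */
        unsigned width_modulus;             /* row width must be a multiple of this */
        unsigned height_modulus;
        unsigned max_pixel_rate_khz;        /* pixel clock ceiling */
        unsigned pixel_formats;             /* bitmask of supported log2bpp values */
        unsigned flags;                     /* e.g. bit 0 = hardware scaling available */
    } gfx_limits;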
Steve Revill (20) 1361 posts |
Nice work, as always! In terms of having this as an image for our documentation, I’d suggest that where a node is later expanded into its own flowchart, you give that node (and the ones in the detail flowchart) a background colour to identify it. That way, the nesting relationships between the flowcharts are a bit clearer. |
Jeffrey Lee (213) 6048 posts |
GraphicsV limitations can be pretty complex as well. Simple stuff like min/max resolution, width/height modulus, etc. is obviously pretty easy to report (and will cover many of the cases where modes will fail), but there are also some more obscure limitations which only the driver can sensibly deal with (mostly related to combinations of parameters). Once you start adding hardware scaling, rotation, multi-monitor support, etc. into the mix things will get even more complicated.

Thinking about it, I don’t think there are actually that many parameters which it would be valid to allow GraphicsV to change. We all know that monitors can display more modes than are listed in their EDID, but the fact is that there’s no guarantee that any arbitrary mode which lies within the monitor’s advertised limits will work. The only modes which are guaranteed to work are those listed in the EDID (assuming the EDID is correct). Other modes might work, but the decision to use them should be left to the user (by crafting an MDF or supplying an adjusted EDID), not to the video driver.

So as far as the display timings are concerned, I think the only parameter which drivers should be allowed to modify is the pixel rate, in order to feed back to ScreenModes what the rate is that will actually be used should the mode be programmed to hardware (so that ScreenModes can make a decision as to whether it lies within the tolerances of the timing specification that’s in use).

Parameters which don’t influence the display timings are fair game. E.g. pixel format – if a certain format isn’t available due to bandwidth/etc. limitations the driver can feed back the next best format that is available. Or if the buffer needs to be a certain width multiple then the driver can indicate that there’ll be some extra padding bytes inserted on the end of each row.

For situations where the user wants a given resolution but the monitor/driver can’t support it natively, the best solution is likely to be display scaling. So in addition to the mode timings (describing the signal which will be sent to the monitor) we’d pass in the requested screen buffer width/height and some kind of parameter to indicate scaling preference (e.g. any aspect ratio to try and stick to). The driver won’t be allowed to change the buffer width/height (apart from adding padding rows/columns), but it can feed back information about whether scaling will be possible, how big any borders will be around the image, etc. ScreenModes can then use that info to work out what to do next, e.g. if the driver can’t support scaling in that configuration then it might look for a mode which is closer to the desired size, or it might activate some inbuilt software scaling layer in the OS.

For hardware which can’t support scaling (and doesn’t have a generic overlay system) it might also be possible to allow the border parameters to be changed, as that will theoretically alter the screen size without altering the signal which is sent to the monitor (for VGA, at least – for DVI/HDMI it will generally end up affecting the data enable signal and so may end up confusing the monitor). |
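One way to picture the feedback being described is a response block where the driver leaves the timings and buffer size alone and only reports what it would actually do. This structure is purely illustrative; the names are not a proposed ABI.

    /* Illustrative only: what a "vet this mode" response could carry back
     * to ScreenModes under the rules described above. */
    typedef struct {
        unsigned actual_pixel_rate_khz;   /* rate the hardware would really use */
        unsigned actual_log2bpp;          /* next best pixel format if the requested
                                             one is unavailable */
        unsigned row_padding_bytes;       /* extra bytes appended to each row */
        int      scaling_possible;        /* non-zero if the requested buffer can be
                                             scaled to the output timings */
        unsigned border_left, border_right, border_top, border_bottom;
    } mode_feedback;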
André Timmermans (100) 655 posts |
I think we must go back to the basics. This says to me that the process should be divided into separate operations/functions:

2) let the driver signal special abilities like hardware scaling
3.A) when the user asks for a given mode:
4) For mode numbers, they can just be converted to a mode on entry of 3) |
Chris Evans (457) 1614 posts |
One of the things that has been great with RISC OS is the users’ ability to push things past the official specifications and not be mollycoddled. I hope this philosophy is not lost through rigid adherence to what EDID reports, which we know is often economical with the truth. ROL got a lot of stick for tightening up the RISC PC video out, which stopped some people using some resolutions that they had previously used with no ill effects for years. |
Andrew Conroy (370) 740 posts |
Just to add to what Chris said, we have a 17" LCD at CJE with a native resolution of 1280×1024. Surprisingly it will still display 1680×1050 and even has a good go at displaying 1920×1080. It might be outside the monitor’s official specs, but we can still push it harder if we want to. |
Jeffrey Lee (213) 6048 posts |
Yeah, that’s fine. ScreenModes will most likely get more and more strict with how it follows a monitor’s EDID – this is a necessity if we want to make sure bad data doesn’t result in us picking a bad mode. If a user doesn’t like the modes available via EDID then he still has the option of loading an MDF or a modified EDID block, causing the original EDID to be ignored completely – we don’t have any intention of changing that behaviour. |
Dave Higton (1515) 3549 posts |
What RO computers have a video output that will generate an HDMI 1080p signal? Will the Raspberry Pi pre model 2, or the BBxM, do it? Dave |
Jeffrey Lee (213) 6048 posts |
Raspberry Pi 1/2, OMAP4, OMAP5 and i.MX6 can do 1080p @ 60Hz just fine (well, I guess there will usually be green speckling on OMAP4)
BBxM can do 1080p @ 30Hz (possibly only with reduced blanking timings)
Iyonix can do 1080p @ 60Hz, but obviously that’s VGA
RiscPC might do 1080p if you ask it nicely (I managed to do 1920×1200 at ~56Hz with some heavily tweaked mode timings… 1080p would be easier I guess) |
Chris Johnson (125) 825 posts |
My ARMini manages 1920×1080 at 40Hz.
My PandaRO is running at 1920×1080 @ 60Hz. Can’t say I have noticed any of the green speckles in general usage. |
Jon Abbott (1421) 2652 posts |
How does Service_EnumerateScreenModes fit into all this? I’ve not tested the theory yet, but I suspect that if a restricted list of resolutions comes back from the selected monitor (or possibly EDID?), RISCOS on the Pi won’t support a lot of perfectly valid resolutions. Is the Pi the only RISCOS platform where this is an issue? Or are there other platforms where the video output resolution isn’t affected by RISCOS? I’ve not tested the post-RC14 builds, but on RC14, selecting monitor type “Auto” – which fixes this issue – prevents you from then setting the default resolution in Configure. You have to either manually set the resolution after each boot, or add a WimpMode entry into the boot sequence somewhere; it’s not particularly user-friendly. I’m taking a guess that Configure uses the monitor definition for the supported resolution list, so selecting Auto can’t provide a list until EDID support is added? |
Jeffrey Lee (213) 6048 posts |
Service_EnumerateScreenModes is handled by ScreenModes. It iterates through all the entries in the MDF, and for each MDF entry it iterates through all the pixel formats supported by the driver. If the driver says that a given MDF entry & pixel format combination are supported then it will add that mode (MDF entry + pixel format combination) to the list returned by the service call. If you’re using *ReadEDID then replace ‘MDF’ with ‘EDID’ in the above. If you’re not using an MDF or EDID (e.g. you’ve got the system configured to use an Archimedes-era monitor type) then Service_EnumerateScreenModes will do nothing, which is perhaps a bit of a bug (the kernel will be using one of its builtin mode lists, there’s no reason why it can’t provide its own Service_EnumerateScreenModes implementation which looks through that) I’ve not tested the theory yet, but I suspect that if a restricted list of resolutions comes back from the selected monitor (or possibly EDID?), RISCOS on the Pi won’t support a lot of perfectly valid resolutions. The MDF (or EDID if used) defines the modes which RISC OS will select. This is true on all machines, unless you use third-party software like AnyMode which provides its own Service_EnumerateScreenModes & Service_ModeExtension implementations. On most machines, the mode you select will translate directly to the mode timings which are sent to the monitor. The only exceptions to this are the Pi (GPU decides physical mode on startup, and scales the RISC OS mode to fit) and the OMAP3 portables (Pandora, TouchBook – the LCD always stays at its default native resolution and the RISC OS display is just positioned in the top-left of the screen, without any scaling).
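In pseudocode, the enumeration described above boils down to a nested loop. This is a sketch of the described behaviour, not the ScreenModes source, and all the helper names are invented.

    /* Sketch: every MDF (or EDID) timing entry is offered to the driver in
     * every pixel format it supports, and the accepted combinations become
     * the mode list returned by Service_EnumerateScreenModes. */
    extern int      mdf_entry_count(void);
    extern void    *mdf_entry(int i);
    extern unsigned driver_pixel_formats(void);              /* bitmask of log2bpp values */
    extern int      driver_supports(void *timings, int log2bpp);
    extern void     add_to_mode_list(void *timings, int log2bpp);

    static void enumerate_screen_modes(void)
    {
        for (int i = 0; i < mdf_entry_count(); i++)
            for (int log2bpp = 0; log2bpp <= 5; log2bpp++)
                if ((driver_pixel_formats() & (1u << log2bpp)) != 0
                    && driver_supports(mdf_entry(i), log2bpp))
                    add_to_mode_list(mdf_entry(i), log2bpp);
    }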
I think the “issue” you’re talking about here is support for the legacy numbered screen modes? For the Pi, there’s actually a special hack for this. We wanted the OS to boot into a 1080p mode, but there aren’t any numbered screen modes for that resolution. So rather than add a numbered mode we went down the route of adding a builtin MDF to the ROM which is loaded on startup. As well as defining the 1080p mode, it also defines all the standard legacy numbered modes. But Configure doesn’t understand the fact that there’s a builtin MDF in Resources – it only knows how to find MDFs which are located in the boot sequence (in fact, I don’t think there’s even a way for it to get the filename of the currently loaded MDF – so it has to resort to checking the Monitor configure file to see which MDF should have been loaded).

If you select a proper MDF from the list available in Configure then chances are you’ll have selected one with only partial support for numbered modes (in fact I don’t think there are any MDFs which provide support for all modes – I had to cobble together the builtin MDF from several different MDFs). So then you’ll end up with the situation where some of your “perfectly valid” modes are no longer available (but the OS will fully acknowledge this, e.g. OS_CheckModeValid will report that the mode isn’t available).

If you select ‘Auto’ monitor type in Configure then that will try enabling the Archimedes-era automatic monitor type detection. Apart from the fact that that feature doesn’t work any more (support has been removed from recent OS versions), I think that option will also remove the LoadModeFile entry from the Monitor configure file – so it will prevent any configured MDF from clobbering the builtin MDF that was loaded during ROM init. I’m not quite sure what state that will leave things in (e.g. whether ScreenModes will honour that MDF or whether it will ignore it because the ‘MDF’ monitor type isn’t selected), but it will certainly get (the majority of) numbered modes working again.

As an aside I’m not really sure why Acorn kept the “Auto” option there – probably just one of those things where it would have been more work and risk to remove it than to just keep it there.
Yes. |
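A side note on the OS_CheckModeValid behaviour mentioned above: a program can ask the OS the same question before trying a numbered mode. A minimal check might look like the following; the register and carry-flag conventions are quoted from memory of the PRM, so double-check them before relying on this.

    #include <stdio.h>
    #include "kernel.h"

    #define OS_CheckModeValid 0x3F

    /* Returns 1 if the numbered mode is available, 0 otherwise (in which case
     * 'substitute' is set to the mode the OS says it would use instead). */
    static int mode_available(int mode, int *substitute)
    {
        _kernel_swi_regs r;
        int carry = 0;

        r.r[0] = mode;
        _kernel_swi_c(OS_CheckModeValid, &r, &r, &carry);

        if (carry) {                      /* C set: mode unknown or unavailable */
            if (substitute) *substitute = r.r[1];
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        int sub;
        if (!mode_available(13, &sub))
            printf("Mode 13 not available; the OS would use mode %d instead\n", sub);
        return 0;
    }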
Jon Abbott (1421) 2652 posts |
It looks like there are actually a few issues at play. Firstly, as you mention, numbered RISCOS modes aren’t necessarily available on the Pi; secondly, the use of a VIDC Type 3 List via Service_ModeExtension is useless on a Pi without a manually added MDF entry. Both of these should work on the Pi, as the hardware supports them. I’m not advocating that RISCOS be changed to support the Pi’s GPU, but I do need to find a way around the problem for games to work correctly… hence my question about how Service_EnumerateScreenModes fits into the mode change process. Is it issued before or after PreMode? Could I, for example, pick up the mode resolution at PreMode or ModeExtension and then return a matching mode when EnumerateScreenModes is called?

Diggers highlighted this issue. I got it working on the Pi under MODE 13, but soon discovered that as RISCOS wasn’t aware of the screen dimensions it was using the wrong address for frame buffer 1. To correct this, I added passing back a valid VIDC Type 3 List in the Type 0/1 ModeExtension translation. RISCOS then buffers frames correctly, but will only go into the correct base mode if Auto is selected in Configure. If I’m reading your response correctly, Auto will no longer work, so I need to find a way for RISCOS to support any resolution to match the GPU behaviour, otherwise very few games will work.
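For reference, the shape of the workaround being described is a module claiming Service_ModeExtension and handing back a type 3 VIDC list for the numbered mode. This is a sketch only, not the actual ADFFS/Diggers code: the register usage is from memory of the PRM and should be double-checked, and the contents of the VIDC and workspace lists are left out entirely.

    #include "kernel.h"

    #define Service_ModeExtension 0x50   /* believed correct; check the PRM */
    #define MY_MODE               13     /* hypothetical: the numbered mode being intercepted */

    /* Filled in elsewhere: a type 3 VIDC list and a VDU workspace (mode variable)
     * list describing the substitute mode; their layouts are in the PRM. */
    extern int my_vidc_type3_list[];
    extern int my_workspace_list[];

    /* CMHG-style service call handler. */
    void service_handler(int service_number, _kernel_swi_regs *r, void *pw)
    {
        (void) pw;
        if (service_number != Service_ModeExtension)
            return;
        if (r->r[2] != MY_MODE)          /* only interested in our numbered mode */
            return;

        r->r[1] = 0;                     /* claim the service call */
        r->r[3] = (int) my_vidc_type3_list;
        r->r[4] = (int) my_workspace_list;
    }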
That explains why all my Pis are unreadable during boot: RISCOS is assuming the monitor supports 1080p. Can that resolution be changed? My Pis crash quite frequently during boot, but I’m unable to report the crashes as I can’t read the error. |
Jeffrey Lee (213) 6048 posts |
Service_EnumerateScreenModes is generally issued well in advance of the mode change. E.g. the display manager on the icon bar only issues it when the list of available modes changes (when Service_ModeFileLoaded is issued) – it then uses the results to build its own cached list of modes (sorted by resolution, framerate, etc.).
Interesting – I would have thought that claiming Service_ModeExtension (and returning a type 3 VIDC list) would have been enough to get it to work. Maybe it’s somehow related to the way the Diggers mode module works? IIRC it looks for an unused numbered mode and hijacks that – so maybe if the wrong monitor type is selected either the mode module or the kernel gets confused and is unable to work out which base mode should be used?
Yeah, I’d expect things to change at some point, but I’m not sure exactly what they’ll be changing to. Eventually I’d expect we’d end up with a proper solution for supporting the legacy modes (either hardware scaling or software emulation), but it will take a while for those plans to come to fruition.

We wanted the OS to boot into a 1080p mode

You should be able to use *Configure Mode to specify a numbered mode to use instead – that’s how we get the Pico image to boot into mode 7. |
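For example, the configured boot mode can be set from the command line; as described above, this is how the Pico image is made to come up in the teletext mode:

    *Configure Mode 7

On a desktop build the equivalent for the desktop is *Configure WimpMode <mode>, assuming that configure keyword is available in the build in question.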
Rick Murray (539) 13863 posts |
Ah, so that is what kept on disabling my custom mode file (with 1280×1024 defined). I “fixed” it by locking the file. Really, shouldn’t “auto” mode prod the EDID to see what the monitor wants, reverting to something sane if that fails? |
Jeffrey Lee (213) 6048 posts |
No, that’s ClrMonitor |
Jon Abbott (1421) 2652 posts |
It actually has the opposite effect and breaks all custom modes coming via Service_ModeExtension. Which of course it would, as for the most part none of the monitor definition files bundled with RO5 have the legacy screen modes in them. Ideally RO needs to ignore monitor files on the Pi, leaving the GraphicsV driver to ensure the mode is valid, as it’s aware of the hardware restrictions – but this doesn’t fit the mode change process.

It’s nothing to do with the way the modes are implemented in Diggers; it’s just that up to Diggers I was processing type 0/1 by changing to the base mode myself and passing the VIDC parameters through to GraphicsV Set Mode.

Sounds like I’m totally buggered: not only is Auto being deprecated, but the mode definition file is potentially only read during the boot sequence. I’m going to have to think of a more devious means to fool RO into allowing valid resolutions. I wonder what happens if I reject the incorrect mode no. at Service_PreMode and hand back the correct mode no? (EDIT: ignore this – I don’t think RO is aware of the legacy modes with a valid MDF)

Thanks for the Configure Mode tip, I wasn’t aware that also changed the boot sequence resolution. |
Jon Abbott (1421) 2652 posts |
Looking at the diagram again, I think the issue I’m seeing may be caused by Service_ModeTranslation, as it appears to always force MODE 25, 26, 27, or 28 if an MDF is used – which will always be the case for legacy modes if a monitor is selected in Configure. It doesn’t look like the call is even passed on if this is the case? |
Steffen Huber (91) 1958 posts |
Is the classic MonitorType still used for anything nowadays? |
Jon Abbott (1421) 2652 posts |
Funny you should mention that, as issuing Configure MonitorType 0 before switching to a legacy MODE resolves the problem. The only drawback is that setting it back to Auto looks like it might be switching RISCOS to its internal MDF instead of the MDF selected in Configure when it goes into a legacy MODE. Incidentally, I’m testing this by issuing MODE 13 with a monitor set in Configure. |
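For anyone wanting to reproduce the test described above, the sequence is simply (with MODE issued from BASIC):

    *Configure MonitorType 0
    MODE 13

Setting the monitor type back afterwards with *Configure MonitorType Auto is what appears to switch RISCOS over to its internal MDF rather than the one selected in Configure.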