GraphicsV questions
Jon Abbott (1421) 2599 posts |
I’m attempting to write a full GraphicsV driver which adds support for legacy modes to the current driver, but am running into a few issues:
A few questions:
|
Colin Ferris (399) 1748 posts |
[driver for legacy modes] Does this include large 4 colour modes :-) |
Jon Abbott (1421) 2599 posts |
I guess this is being caused by the call to GraphicsV 16 – as a workaround, I’ve suppressed the prompt with Wimp_CommandWindow -1
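For reference, the workaround amounts to a single call before the driver switch – a sketch, assuming the standard Wimp SWI where R0 = -1 tells the Wimp to skip the command window handling, as described above:

            MVN     R0, #0                  ; R0 = -1: suppress the command window
            SWI     &600DF                  ; XWimp_CommandWindow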
ADFFS has provided legacy modes for years, but relies on the GPU upscaling, so it isn’t much use if EDID is enabled or the monitor can’t handle the resolution the GPU is sending it. As my target is the Pi, that’s not really an issue, but it means it’s not much use as a generic legacy mode GraphicsV driver. I’m testing using a full GraphicsV driver as I’ve noticed the Wimp is not displaying some dialogue boxes – they seem to be drawn off screen. Switching to a full driver has made no difference to this particular issue however, so GraphicsV isn’t the issue. |
Jeffrey Lee (213) 6046 posts |
Currently when you switch driver, the OS forces a mode change, to make sure that everything is set up correctly. The code could probably do with being tweaked a bit so that it’ll behave nicer when in the Wimp.
VSync should start on the first mode change request. If multiple drivers are present, they should all forward the VSync events to the OS; as long as the correct driver number is provided when the driver calls GraphicsV 1, the OS will be able to filter out all but the VSync events for the current driver. Once a driver starts generating VSync, there’s no explicit way for the OS to stop it being generated (we don’t have a “shut down hardware” call yet), so generally it will continue to be generated until the driver deregisters itself. The exception is that drivers are allowed to stop generating VSync in response to GraphicsV 4 calls.
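For illustration, a minimal sketch of a driver forwarding a VSync event to the OS, assuming the documented GraphicsV conventions (driver number in bits 24-31 of R4, reason code 1 for VSync, GraphicsV being vector &2A); my_driver_number is a hypothetical label holding the number obtained at registration:

    vsync_notify
            LDR     R4, my_driver_number    ; number allocated when we registered
            MOV     R4, R4, LSL #24         ; driver number in bits 24-31 of R4
            ORR     R4, R4, #1              ; reason code 1 = GraphicsV_VSync
            MOV     R9, #&2A                ; GraphicsV vector number
            SWI     &20034                  ; XOS_CallAVector
            MOV     PC, R14

    my_driver_number
            DCD     0                       ; filled in at registration time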
MOV PC,R14 would make the most sense, so that other GraphicsV claimants (including any default claimant in the OS) have the option to implement the feature.
For the OS to use DA 2, you’ll want to leave GraphicsV 9 unclaimed, and leave bit 3 of the feature flags clear. GraphicsV 2 must be implemented & claimed, otherwise you’ll have a driver which is incapable of changing screen mode (although I suspect the OS just blindly calls GV 2 and doesn’t check if it was claimed or not). GraphicsV drivers shouldn’t need to interact with DA 2 directly; the OS will fully manage it itself.
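A hedged sketch of that arrangement, assuming the documented GraphicsV convention that a handler always returns with MOV PC,R14 and flags a handled call by setting R4 to 0 (my_driver_number is again hypothetical, and register preservation details are glossed over):

    graphicsv_entry
            LDR     R12, my_driver_number
            CMP     R12, R4, LSR #24        ; call addressed to our driver?
            MOVNE   PC, R14                 ; no: pass it down the vector chain
            AND     R12, R4, #&FF           ; reason code (low byte suffices today)
            CMP     R12, #2                 ; GraphicsV 2 (set mode)?
            BEQ     set_mode
            MOV     PC, R14                 ; GV 9 etc. left unclaimed, so the
                                            ; OS carries on managing DA 2
    set_mode
            ; ...program the hardware from the VIDC list pointed to by R0...
            MOV     R4, #0                  ; flag the call as handled
            MOV     PC, R14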
Currently there’s no way for a driver to determine how many screen banks the OS/program is requesting.
It’ll call GraphicsV 9 after every call to GraphicsV 2.
No.
The main vector entry point needs to be fully re-entrant, to allow it to cope with e.g. multiple drivers being installed. Beyond that, only a handful of reason codes need to be re-entrant. The reason code listing and footnotes summarise the requirements. |
Jon Abbott (1421) 2599 posts |
I can’t seem to get it to manage the DA2 size. When does it check the features? The GPU driver will require bit 3 set, and I only clear it once Service_PreModeChange has told my driver to take over. Ideally I’d like to leave the existing GraphicsV driver running and only switch to mine in Service_PreModeChange when it’s a mode the GPU doesn’t support; this seems to stiff the machine though, probably because switching GraphicsV drivers also forces a mode change? |
Jeffrey Lee (213) 6046 posts |
Yeah, switching driver while in the middle of a mode change isn’t supported (and I doubt we’d ever support it). Hooking logic into Service_PreModeChange is probably “wrong” as well. It isn’t explicitly stated anywhere, but the way I’ve been treating GraphicsV is that individual GraphicsV drivers should be agnostic of what’s controlling them. Most of the time it’ll be the kernel’s VDU driver that controls a GraphicsV driver, but if you had multiple drivers on a system then it could just as easily be a standalone app which is controlling one of the drivers, or another GraphicsV driver (e.g. for emulating low BPP modes).

At the moment the kernel doesn’t really support switching between DA 2 and GraphicsV 9 on a per-mode basis, but with a bit of extra logic in your mode change code I think you can make it work. So the way I’d approach writing the driver would be as follows:
It might be worth thinking of this as two separate drivers – a legacy mode emulation driver (whose job is to only emulate legacy modes), and a “switcher” driver which decides whether the legacy mode driver or the GPU driver should be used for any given mode (by looking at the mode vet results for those two drivers). Apart from maybe some nasty logic for switching to and from DA 2, the switcher driver shouldn’t need to know anything special about the two drivers that it’s switching between.

ModeChangeSub is the kernel routine that handles performing a mode change. The logic is a bit convoluted, but the overall flow is as follows:
So for the case where you’re switching from the GPU driver to your emulation, the process will start with the ExternalFramestore flag being set, and the cached feature flags showing that a variable-size external framestore is in use. This means the DA 2 resize in step 3 will be skipped (which is why you’ll need to do it yourself in GV 2), and GV 2 will be called in step 5 (ready for the OS to go “oh, we’re using DA 2” in step 6). Once your driver is in the “active” state, the cached flags will still say that the variable framestore is in use, so it’ll skip the 2nd GV 2 call in step 7.

For the case where you’re switching from “active” mode back to “pass-through”, the ExternalFramestore flag will be clear, so step 3 will check/resize DA 2. However, due to the “variable framestore” flag still being set, step 5 will still be the one to call GV 2, allowing your driver to switch back to “pass-through” mode ready for the OS to call GV 9 in step 6 and switch over to using the GPU’s framestore. |
Jon Abbott (1421) 2599 posts |
That’s exactly how I programmed it. GraphicsV is going to need some extensions to do this properly though, as we really want the OS to be managing DA2. There also needs to be a flag to allow seamless GraphicsV driver changes for specific modes, and a means of communicating how many screen buffers are required. For the time being I’ve botched it by checking bit 7 of the mode number in Service_PreModeChange.

There may be a bug in the mode change process though, as OS_ReadVduVariables 148 returns the top of DA2 (&34000000) if you change from the desktop into a mode that comes via Service_ModeExtension. If you change into a legacy mode first, then into the mode via Service_ModeExtension, it returns the correct value. Issuing OS_Byte 112,1 whilst in the mode also corrects OS_ReadVduVariables 148. |
Jeffrey Lee (213) 6046 posts |
Finishing off the GraphicsV 19 integration is probably the best way of dealing with this; it was designed to allow drivers to dictate their memory requirements, but currently it’s only being used by ScreenModes to cope with the row padding requirements of the Pi. A quick-and-dirty integration where the kernel calls GraphicsV 19 during the mode change process should allow it to work out whether the new mode will be using DA 2 or GraphicsV 9, and adapt itself accordingly.
I don’t think it’s the approach that I would take, but as long as it’s done properly I guess it won’t cause any problems. Ideally the information about which driver to use would be returned as part of a mode vet call (or similar), so that the kernel can switch to the driver at the appropriate time during the mode change process. (i.e. we don’t want programs to surprise the kernel by calling OS_ScreenMode 11 at a random time during the mode change process). However I’m not sure what that “mode vet call (or similar)” should be.
I’ve done (part of) the user-facing aspect of that by adding the “min screen banks” mode variable (which is currently only understood by the hardware overlay system). Finishing the job by plumbing that into the VDU driver (and adding a “min screen banks” control list item) should be pretty straightforward. |
Jon Abbott (1421) 2599 posts |
I’ve not implemented GraphicsV 19 yet; there’s probably not much point while nothing is using it, as I won’t be able to test that it works correctly. It does look like it would resolve the DA2 issue if implemented in the mode change process, as you suggest. As for the dynamic driver switch on a per-mode basis, we could live with the way I’m currently doing it without extending GraphicsV, provided the OS rechecks the framebuffer at the relevant points and OS_ReadVduVariables returns the correct values.

With regard to the number of banks the driver should allocate, how will your mode variable be communicated to GraphicsV? As a parameter for GraphicsV 2, or a VDU variable check? And does this variable adjust for shadowed modes, i.e. MODE 13+128? |
Jeffrey Lee (213) 6046 posts |
ScreenModes uses it, and that’s what does 99% of mode vetting. The Pi pads the framebuffer rows so that they’re a multiple of 16 pixels and a multiple of 32 bytes wide, so you’ll know it works if any MDF/EDID mode which doesn’t satisfy both of those constraints works correctly (e.g. Pi-top 1366×768 in any colour depth, or 720-width modes in 8bpp, or 360-width modes in any colour depth, were some cases that I checked). If it isn’t working, then either the mode will be rejected or it’ll appear garbled (I think the old logic used to reject any mode that wasn’t a multiple of 32 bytes, but wasn’t checking the 16 pixel limitation – so 360-wide modes in 32bpp were garbled).

The only significant place which doesn’t use GraphicsV 19 is the kernel, which means that mode 47 (360×480) won’t work if you’re using an explicit monitor type (and don’t have an MDF loaded).
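To illustrate the arithmetic (my own sketch, not code from BCMVideo or ScreenModes): round the width up to a multiple of 16 pixels, convert to bytes, then round up to a multiple of 32 bytes.

    ; In:  R0 = width in pixels, R1 = log2 bytes per pixel (8bpp=0, 16bpp=1, 32bpp=2)
    ; Out: R0 = padded row stride in bytes
    pi_row_stride
            ADD     R0, R0, #15
            BIC     R0, R0, #15             ; round up to a multiple of 16 pixels
            MOV     R0, R0, LSL R1          ; convert pixels to bytes
            ADD     R0, R0, #31
            BIC     R0, R0, #31             ; round up to a multiple of 32 bytes
            MOV     PC, R14

So a 360-wide 32bpp mode becomes 368 pixels (1472 bytes), and a 720-wide 8bpp mode stays 720 pixels but is padded from 720 to 736 bytes – which is why those were good test cases.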
I’ll add a new VIDC control list item.
Yes, I’ll make sure it gets set to a minimum of 2 for shadowed modes (and teletext, which uses two banks to handle flashing text). |
Jon Abbott (1421) 2599 posts |
I’ve added GraphicsV 19, which returns “Mode will use system framestore”, but the OS still refuses to manage DA2. My implementation’s sequence of events is as follows, based on the OS sequence you outlined above:
Somewhere in there, the following also occurs:
|
Jeffrey Lee (213) 6046 posts |
I think we’re talking at cross-purposes a bit. The call is being used by ScreenModes (so a GraphicsV 19 implementation can reasonably be tested), but ScreenModes doesn’t use the bit of the call which is most important to you (so you won’t be able to test that GraphicsV 19-controlled framebuffer switching works correctly). Compared to GraphicsV 7, the only reason ScreenModes uses GraphicsV 19 is so that it can get the correct ExtraBytes value to place in the control list (which the kernel+driver will then use to work out the memory requirements). But ScreenModes can’t influence where the kernel decides to place the framebuffer, so you’ll have to wait until I’ve had a chance to add GraphicsV 19 calls to the kernel for the framebuffer information to be used (hopefully sometime in the next week or two, since I’m nearing the end of my current task). |
Jon Abbott (1421) 2599 posts |
I wasn’t sure from your description where the relevant code was in the OS, so gave it a try anyway.
While you’re there, would it be possible to add something to temporarily disable gamma (an environment variable?), at least until we figure out what the cause of the blanking is. |
Jeffrey Lee (213) 6046 posts |
I started having a look at this last night. Although it would be trivial to handle framebuffer switching by dropping a GraphicsV 19 call into the bit which configures the framebuffer, a quick patch like that will only serve to make the mode change procedure even messier than it already is. So I’m thinking now would be a good time to do some refactoring/tidying as initially proposed here. It’ll take a bit longer, but it’ll deliver some much-needed benefits now and in the future.
Possibly you’ve already spotted it, but this feature went in a week or two ago with the other task I was finishing off (GraphicsV overlays for BCMVideo). |
Jon Abbott (1421) 2599 posts |
Sounds sensible; while you’re there, add a specific palette entry type for gamma, to resolve the phantom palette entries when emulating low bpp modes.
I did – thanks for adding that; I was stuck on 5.23 until it went in. |
Jeffrey Lee (213) 6046 posts |
Bug of the day: apart from mode 0, the default ECF patterns for non-square pixel 1bpp modes are “wrong”. https://www.riscosopen.org/viewer/view/castle/RiscOS/Sources/Kernel/s/vdu/vduplot?rev=4.4#l711

Mode 0 is a rectangular pixel mode (XEig 1, YEig 2). The default ECF patterns double up each pixel horizontally, so that the stripes in pattern 3 come out at 45 degrees, and patterns 0 & 2 form a regular grid. Mode 4 is a square pixel mode (XEig 2, YEig 2). Its default ECF patterns don’t double up the pixels, so that the stripes in pattern 3 still come out at 45 degrees, and patterns 0 & 2 still form a regular grid.

The mode 4 pattern is used for all other 1bpp modes. For square pixel modes this will be fine, but for rectangular ones (e.g. modes 33, 37, 41, 44) it’ll come out wrong, because the ECF pattern isn’t being adjusted to take into account the new pixel aspect ratio. For other colour depths the pixel aspect ratio isn’t so important, since the default ECF patterns are either simple checkerboard patterns (2bpp, 4bpp), horizontal stripes (8bpp, 16bpp), or solid colours (32bpp).

(Discovered while trying to determine the logic behind the ECFIndex values in the built-in mode definitions, since ECFIndex (and PalIndex) aren’t mode variables and so can’t be passed to the VDU driver if the built-in modes were changed to be looked up via Service_ModeExtension.) |
Jeffrey Lee (213) 6046 posts |
Scratch that – 2bpp modes have grid patterns as well. |
Jon Abbott (1421) 2599 posts |
GraphicsV 2 (Set Mode) is listed as a foreground call only, which causes me some issues. In my original GraphicsV intercept code, I changed the resolution in the blitter when it’s called during GraphicsV 1 (VSync), which seemed to work. In the full VIDC20 GraphicsV driver that I’ve now coded, however, it stiffs the machine if I call the GPU’s GraphicsV 2 during GraphicsV 1. Oddly, if I change the resolution when I see GraphicsV 1 for the GPU driver, instead of my own, it doesn’t stiff.

This however creates another issue, as the VIDC20 registers are then out of sync – the screen geometry on VIDC/VIDC20 is fixed at a specific time during flyback, which games such as James Pond 2 and GBA exploit, and is why they don’t render correctly under emulators. So my question is, what is restricting GraphicsV 2 to foreground only? I need to find a workaround for this restriction, so the GPU resolution can be set during GraphicsV 1. |
Jeffrey Lee (213) 6046 posts |
The reason the wiki lists it as foreground only is because there’s a lot of stuff which drivers have to do which might be difficult to support from IRQ, e.g. memory allocation or memory map changes. Or, if the driver has been re-entered, it might be difficult for it to update its state without breaking the foreground code. Plus, historically the OS hasn’t supported performing mode changes from the background, so apart from specialist things like ADFFS there’s been no need to support it.
It’s probably where it makes some calls to VCHIQ – VCHIQ mostly runs from RTSupport routines, so if you’re calling it from an interrupt handler then they’ll be blocked and the driver will fail in some way. If you’re able to call GraphicsV 2 from an RTSupport routine, then I think you’ll have a greater chance of success – although there are some odd issues with thread priority that I haven’t resolved yet so you might have to try a couple of priority levels until you find the right one that works. |
Jon Abbott (1421) 2599 posts |
My VSync generator runs under RTSupport and did originally call the blitter, but it also resulted in an instant stiff when it called GraphicsV 2.
Switching the blitter back to being called from RTSupport and going to priority 160 (VideoPaint) fixes the locks, but I can’t go over 64 (AudioDecode) as it completely knackers sound – which must therefore need a higher priority than the blitter. |
Jeffrey Lee (213) 6046 posts |
“try a couple of priority levels until you find the right one that works”

Odd that you had to increase the priority to get things to work, since my notes seem to suggest that decreasing it is the key to success. I’m assuming the AudioDecode routine at 64 is one of yours? Could it be raised to e.g. 161? The OS uses the following:
|
Jon Abbott (1421) 2599 posts |
Not mine, it’s something in the OS. My code takes over the ChannelHandler, but with the blitter set above 64, static noise appears in the sound. I’ve never investigated where the static is coming from, but presumed it was something SoC (Pi3) related, as the OS would surely either silence a missed fill or repeat an earlier buffer. My guess would be that the SoC is overrunning the buffer. Looking at your list, BCMSound might be the culprit, but I’m sure I’ve tried increasing the priority, with 64 being the highest it can go before static starts appearing.

EDIT: The cut-off point at which GraphicsV 2 breaks is 128, so VCHIQ must be causing the lock. |
Jon Abbott (1421) 2599 posts |
It’s possibly the same issue that’s affecting GraphicsV 2, although it’s odd that there’s no lock if it’s the GPU’s VSync that calls GraphicsV 2. How is the GPU VSync being generated on the Pi? Is it software or hardware generated? |
Jeffrey Lee (213) 6046 posts |
VSync events are generated by interrupt handlers. With fake_vsync_isr=1, it’ll be generated by BCMVideo every time the GPU SMI interrupt fires. With fake_vsync_isr=0, the kernel will trigger it from the timer 0 IRQ handler (on every other tick, generating a fake 50Hz VSync). Checking the source, I think the kernel just directly invokes the VDU driver’s vsync handler – i.e. it doesn’t go via GraphicsV 1. |
Jon Abbott (1421) 2599 posts |
The lock/priority issue is going to be in VCHIQ’s RTSupport code then, as that seems to be the common factor. Looking at the VCHIQ C source, it does a lot of enabling/disabling of IRQs around RT_Yield, which can’t be good practice? Should it not be using a more conventional mutex method that uses LDREX/STREX?
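For illustration, a minimal LDREX/STREX spinlock along the lines being suggested – a hedged sketch only (ARMv7 barriers shown, lock_word is a hypothetical word initialised to 0, and spinning at IRQ level has its own hazards):

    ; Claim the lock: loop (or RT_Yield) until the word can be set to 1
    lock_claim
            MOV     R1, #1
    lock_retry
            LDREX   R0, [R2]                ; R2 -> lock_word
            CMP     R0, #0                  ; already held?
            STREXEQ R0, R1, [R2]            ; free: try to claim it
            CMPEQ   R0, #0                  ; did the exclusive store succeed?
            BNE     lock_retry              ; held, or lost the race: retry
            DMB                             ; barrier before the critical section
            MOV     PC, R14

    ; Release the lock
    lock_release
            MOV     R0, #0
            DMB                             ; barrier after the critical section
            STR     R0, [R2]
            MOV     PC, R14
|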