Mapping pages between PMPs
Lee Noar (2750) 16 posts
Is it possible to map pages from one PMP DA to another? OS_DynamicArea 22 only deals with page indices, which seems to indicate that the pages must be contained within one PMP. Or is it a case of using OS_DynamicArea 21 first to release the pages from one PMP and then claim them in the other? Will the page contents be preserved?
Jeffrey Lee (213) 6046 posts
You can claim a page which is currently in use by another PMP/DA (specify the appropriate physical page number in the OS_DynamicArea 21 block). Functionally it works the same as specifying page numbers in the Dynamic Area PreGrow handler – assuming the current owner hasn’t locked the page, the OS will transfer the page to your DA, and put a replacement page into the original DA (filling it with a copy of the page contents, so that the page move is entirely transparent to the original owner). When the page is moved into your DA, it will still contain its contents, but this is just a side-effect of the implementation rather than a guarantee. So I don’t think it would be wise to rely on that behaviour.
Correct. Currently there’s no way of sharing pages between PMPs/DAs (or having one PMP which is shared with multiple DAs); if you need that kind of functionality then let me know and I can have a think about potential solutions.
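[Editor's note: the transfer-with-replacement behaviour described above can be sketched as a toy model in plain C. All names here are invented for illustration; this is not the RISC OS API, just the semantics Jeffrey describes.]

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 8              /* toy page size */
#define NPAGES    4

/* One "physical page" of toy memory per page number. */
static char phys[NPAGES][PAGE_SIZE];

struct area { int pages[2]; int count; };  /* page numbers owned */

static int next_free = 2;        /* pages 2..3 form the toy free pool */

/* Claim physical page `pn` for `dst`.  If `src` currently owns it,
 * give `src` a replacement page holding a copy of the contents, so
 * the transfer is transparent to `src` (mirroring the PreGrow-style
 * behaviour described above). */
static void claim_page(struct area *dst, struct area *src, int pn)
{
    if (src && src->count > 0 && src->pages[0] == pn) {
        int repl = next_free++;                  /* replacement from free pool */
        memcpy(phys[repl], phys[pn], PAGE_SIZE); /* preserve contents for src */
        src->pages[0] = repl;
    }
    dst->pages[dst->count++] = pn;
    /* NOTE: in the real OS the contents of the page as seen by dst
     * are NOT guaranteed -- treat them as undefined. */
}
```

The key point the model captures is that the *original* owner's view is guaranteed (it gets a replacement page with identical contents), while the *new* owner's view of the page contents is an implementation detail.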
Lee Noar (2750) 16 posts
Do you mean the hardware implementation or the RISC OS implementation? I’m looking at implementing shared libraries in the same way Linux does, using one PMP for the loaded libraries and another for per-task data. Alternatively, I could store the text and data segments in the same PMP so that the mapping is contained within a single PMP. If that’s not possible, I guess I could memcpy the segments, but I think that would impact performance.
Jeffrey Lee (213) 6046 posts
RISC OS implementation. E.g. in the future we might decide that we want to zero all freshly-allocated pages for security, or as an optimisation we might invalidate the cache instead of flushing it when mapping out the page from its current location, etc. If necessary we can add extra flags to OS_DynamicArea 21 to allow for explicit move or copy behaviour. But I’m guessing you’re more interested in sharing than copying/moving.
I’m confused – is that two PMPs or three? Are these PMPs shared between all tasks/programs, or are some of them per-task? Also, are you going to need to have multiple mappings of pages? (i.e. the same page simultaneously mapped to different logical addresses). At the moment PMPs don’t support multiple mapping, and when they do gain support for it, the level of support will vary by CPU architecture (prior to ARMv6, there’s no support for having a cacheable + writeable page multiply-mapped, and even on ARMv6 it can be a bit tricky due to “page colouring”).

It might also be worth thinking ahead a bit to how things will work in a multi-core world. (Admittedly it’ll probably be 6-12 months until we start supporting running application threads on other cores, but it’s worth thinking about it now just to avoid having to rewrite from scratch once the feature does become available.)

Long-term, I guess sharing of pages between PMPs is something that we will want to support, because that would then allow for (application) shared libraries to live in application space instead of the global DA address space. It’s just going to be a bit of a headache to modify the kernel so that it can deal with everything we need it to.
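[Editor's note: the “page colouring” restriction can be illustrated with a small check. This is a sketch under assumptions: the 16 KB way size stands in for a typical VIPT cache of four 4 KB pages per way; the real figure depends on the CPU's cache geometry.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000u   /* 4 KB pages */
#define WAY_SIZE  0x4000u   /* ASSUMED cache way size: 16 KB (CPU-dependent) */

/* On a VIPT cache whose way size exceeds the page size, two cacheable
 * mappings of the same physical page only behave coherently if their
 * virtual addresses have the same "colour", i.e. are congruent modulo
 * the cache way size. */
static int same_colour(uint32_t va1, uint32_t va2)
{
    return ((va1 ^ va2) & (WAY_SIZE - 1) & ~(PAGE_SIZE - 1)) == 0;
}
```

Under this model, a double mapping at 0x8000 and 0xC000 would be acceptable (same colour), while 0x8000 and 0x9000 would alias in the cache.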
A word of warning – at the moment most of the OS_DynamicArea reason codes (including 21 & 22) don’t support being used from IRQ handlers (they’ll return an error), so IRQ handlers which call shared libraries are likely to fail. I’m not sure of the exact reasoning behind the restriction – for the PMP operations I just followed the lead of OS_ChangeDynamicArea and the other OS_DynamicArea reason codes. But I think it’s a combination of most of the memory management code not being re-entrant, slow performance if cache flushing is required, and re-entrancy restrictions for some of the service calls.

Some day I’m hoping to loosen the restrictions a bit, e.g. so that PMPs can map in pages which they already own, and so that dynamic areas can do simple grows (claiming pages from the free pool) – both operations which shouldn’t require cache maintenance. But as with most things on my todo list, I have no idea when I’ll get round to doing it!
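[Editor's note: one conventional way around an IRQ-time restriction like this is to defer the page operation when called from interrupt context and replay it later, once the IRQ has unwound – on RISC OS, typically via a transient callback registered with OS_AddCallBack. A toy model in plain C, with all names invented:]

```c
#include <assert.h>

/* Toy model: instead of calling the SWI from an IRQ handler (where it
 * would return an error), queue the request and replay it from a
 * callback that runs once no IRQ handlers are active. */

static int in_irq;                 /* set while an IRQ handler runs */
static int pending[8], npending;   /* page indices queued from IRQ context */
static int mapped[8], nmapped;     /* pages actually mapped so far */

static void map_page(int page)
{
    if (in_irq) {                  /* SWI would error: defer the work */
        pending[npending++] = page;
        return;
    }
    mapped[nmapped++] = page;      /* foreground: safe to do it now */
}

static void callback(void)         /* runs after the IRQ has unwound */
{
    while (npending > 0)
        mapped[nmapped++] = pending[--npending];
}
```

The real deferral mechanism and its timing guarantees would depend on the OS; the point is only that no memory-management call happens while an IRQ handler is active.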
Lee Noar (2750) 16 posts
Any chance that these could be optional?
Sorry, I’ll try to explain better. There are two PMPs, one containing the full ELF image of the loaded libraries (text+data segments) and one containing the data segments that are specific to each task.
I don’t think so – I suspect Linux may do something like that, but I can manage with just moving pages around as required. I don’t really need two PMPs; that was just for the convenience of keeping libraries and task data segments separate. I was going to say it helps with fragmentation too, but that’s not such an issue when you can release unused pages back to the system. I’d be happy to use just one PMP and swap pages about – they don’t need to be shared.
Ouch! So I can’t call OS_DynamicArea from an abort handler? That could be a problem. Can I drop down to SVC mode first and then manipulate the PMP pages? Obviously, this is being done in a module.
Jeffrey Lee (213) 6046 posts
Yes – that’s what I meant by the copy/move flags. (copy = current behaviour, but guaranteed to preserve the contents of the page, move = take the page and preserve the contents, but don’t put a replacement page into the source area)
Yeah, I think using just one PMP would be best for now.
Calling it from an abort handler should be fine (assuming the abort didn’t come from somewhere inside OS_DynamicArea – it has a flag it uses to guard against re-entrancy). But if the abort occurs from an IRQ handler then it will get upset (the kernel keeps track of whether any IRQ handlers are active; dropping into SVC mode won’t save you). And of course if the abort occurs from an IRQ handler while the foreground was in the middle of an OS_DynamicArea call then you’re doubly screwed.

Another thing to be wary of is code which uses SWIs to examine the memory map – OS_ValidateAddress, OS_Memory 0 (used by DMAManager to map logical → physical), etc. The kernel has special hooks in place to make sure these calls work correctly with lazy task swapping. But if you’re implementing your own lazy page swapping system then I don’t think there’s any way to handle them. So I guess this is a situation where we’d need to extend the OS so that PMP DAs can lazily map memory with the same level of transparency as application space.
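[Editor's note: the idea of lazily mapping pages from an abort handler has a close POSIX analogue: reserve address space with no access permissions, then make each page usable from the fault handler on first touch. A sketch for Linux follows; RISC OS itself would instead hook the abort environment handler and call OS_DynamicArea, and a real implementation would need error checking the sketch omits.]

```c
#define _GNU_SOURCE
#include <assert.h>
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>

/* Reserve a region with no access; the first touch of each page
 * faults, and the handler maps the page in on demand. */

static char *region;
static const size_t region_size = 4096;

static void on_fault(int sig, siginfo_t *si, void *uc)
{
    (void)sig; (void)uc;
    /* Round the faulting address down to a page and make it usable.
     * (mprotect is async-signal-safe per POSIX.) */
    uintptr_t page = (uintptr_t)si->si_addr & ~(uintptr_t)4095;
    mprotect((void *)page, 4096, PROT_READ | PROT_WRITE);
}

static void setup(void)
{
    struct sigaction sa = {0};
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    region = mmap(NULL, region_size, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(region != MAP_FAILED);
}
```

After `setup()`, writing to `region[0]` faults once, the handler maps the page, and the write then completes transparently – the same shape of transparency the post says PMP DAs would need from the OS.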