Hello, I made a proof of concept for the data-aware drawing thing I mentioned in a previous email.
Here's the proof of concept, and explanation:
I used the alien image from the Pygame Zero tutorial as the example, to show that this can be applied to Pygame Zero. But really it can be applied to any engine that draws images.
I *think* that alien is public domain or Creative Commons. There's a whole asset pack. It doesn't come from Pygame Zero.
There's another "jit" I wrote a long time ago for my first Pygame game here: https://bitbucket.org/lordmauve/metalwork/src/b2b916858e8e8c63222ae609bcd836ab5b55f49b/robots/graphics/scenegraph.py?at=default&fileviewer=file-view-default
It calculates dirty regions by comparing the blits that occur between successive frames.
I guess this could be a Surface style class (OptimisingSceneGraph) which does all this scene graph change tracking business.
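The scenegraph idea could be sketched roughly like this (this is not the code from that repo, just a minimal illustration; the blit representation and names are hypothetical):

```python
# Hypothetical sketch: compute dirty rectangles by diffing the blit lists
# of two successive frames. A "blit" here is (image_id, (x, y, w, h));
# any blit present in one frame but not the other marks its rect dirty.

def dirty_rects(prev_blits, next_blits):
    """Return the rects that changed between two frames of blits."""
    prev = set(prev_blits)
    nxt = set(next_blits)
    # Blits removed since last frame must be redrawn under, blits added
    # must be drawn over: both contribute their rect to the dirty list.
    return [rect for (_, rect) in (prev ^ nxt)]

frame1 = [("alien", (10, 10, 32, 32)), ("ground", (0, 90, 100, 10))]
frame2 = [("alien", (14, 10, 32, 32)), ("ground", (0, 90, 100, 10))]
print(dirty_rects(frame1, frame2))  # the alien's old and new positions
```

A real version would also merge overlapping rects before passing them to pygame.display.update().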
Nice one. That seems like something that will work quite well with pgzero.
Another related technique this brings to mind: having command queues with events that cache the screen at certain points (like key frames). This way you can go backwards in time without it taking minutes to redraw everything from the start. Not really sure that's at all useful for this; it's mainly used for animations and content creation apps.
If you saved all this drawing metadata during development, then it would be quite a speed-up for playing scenes for the first time too. Of course, only if they are somewhat static animations.
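The key-frame idea above could be sketched like this (names and the snapshot interval are hypothetical; the "state" here is any replayable value, standing in for a cached screen):

```python
# Hypothetical sketch of the key-frame idea: record drawing commands,
# snapshot the state every N commands, and seek backwards by replaying
# only from the nearest earlier key frame instead of from zero.

KEYFRAME_EVERY = 100

class CommandQueue:
    def __init__(self, initial_state):
        self.commands = []              # callables: state -> new state
        self.keyframes = {0: initial_state}

    def record(self, command):
        self.commands.append(command)
        n = len(self.commands)
        if n % KEYFRAME_EVERY == 0:
            # Snapshot. (A mutable state, like a Surface, would need a
            # copy here rather than a reference.)
            self.keyframes[n] = self.state_at(n)

    def state_at(self, n):
        """Rebuild the state after the first n commands."""
        base = max(k for k in self.keyframes if k <= n)
        state = self.keyframes[base]
        for cmd in self.commands[base:n]:
            state = cmd(state)
        return state
```

Seeking to command 120 then only replays 20 commands from the key frame at 100, rather than all 120.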
On Fri, Mar 31, 2017 at 11:10 PM, Daniel Pope <[hidden email]> wrote:
One weekend ago in a land far far away, I attempted to find good algorithms for segmenting images... One part that will be useful is the "maximal rectangle problem" type algorithms.
That means trying to find the largest rectangle that fits inside an area. This, combined with pygame.transform.threshold and pygame.mask, should be usable for finding areas in the image. Especially for the "big image with alpha transparency which is mostly on the sides" case.
I attempted an implementation in Python from those articles. After renaming single-letter variable names and such, it's sort of readable. (It's funny how Python can be more readable than pseudo code.) Unfortunately it's not exactly correct in some cases, but it works well enough to be useful.
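For reference, the standard stack-based version of this could be sketched as below. This is not the implementation from those articles, just the textbook approach: build a histogram of consecutive-1 heights per row, then find the largest rectangle under each histogram.

```python
# Maximal rectangle sketch: largest axis-aligned rectangle of 1s in a
# binary grid. Returns (area, x, y, w, h). One histogram pass per row.

def largest_rect_in_histogram(heights):
    """Largest rectangle under a histogram: (area, left, width, height)."""
    stack = []                  # indices of bars with increasing heights
    best = (0, 0, 0, 0)
    for i, h in enumerate(heights + [0]):   # 0 sentinel flushes the stack
        while stack and heights[stack[-1]] >= h:
            height = heights[stack.pop()]
            left = stack[-1] + 1 if stack else 0
            width = i - left
            if width * height > best[0]:
                best = (width * height, left, width, height)
        stack.append(i)
    return best

def maximal_rectangle(grid):
    heights = [0] * len(grid[0])
    best = (0, 0, 0, 0, 0)
    for y, row in enumerate(grid):
        # Extend each column's run of 1s, or reset it on a 0.
        heights = [h + 1 if cell else 0 for h, cell in zip(heights, row)]
        area, left, width, height = largest_rect_in_histogram(heights)
        if area > best[0]:
            best = (area, left, y - height + 1, width, height)
    return best

grid = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 0],
]
print(maximal_rectangle(grid))  # (6, 1, 0, 2, 3)
```

A pygame.mask.Mask would first be converted to such a grid (e.g. via Mask.get_at per pixel) before running this.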
pygame.mask already has some functionality which could be used, including finding connected areas, and bounding rectangles for those areas. So I'll use these as a first pass, and then use this other algorithm to find the largest rectangle that fits entirely inside. The benefit of working with masks is that they are 1 bit per pixel, so you have a lot less data. Less data means things run quickly.
Step 1 however is to find out what type of image this is. If it's a photo, then this won't be useful. If it's a PNG, then it could contain transparency. This is why it will help for the algorithm to work not just from surfaces, but from the input file (JPEG == do not use this optimization). JPEG and photo images can also be sped up (4:2:2 images are only 8 bits after all), but that's another topic.
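That file-level triage could be as cheap as checking the magic bytes at the start of the file (a sketch; the function name is made up):

```python
# Hypothetical first pass: decide from the file itself whether the
# transparency optimisation is worth attempting. JPEG can't carry an
# alpha channel, so skip it; PNG might, so inspect the pixels further.

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
JPEG_MAGIC = b"\xff\xd8\xff"

def may_have_alpha(header_bytes):
    """Cheap triage on the first bytes of an image file."""
    if header_bytes.startswith(JPEG_MAGIC):
        return False    # JPEG: photo-like, no alpha channel
    if header_bytes.startswith(PNG_MAGIC):
        return True     # PNG: might have transparency; check the pixels
    return True         # unknown format: don't rule the optimisation out
```

A PNG result of True still needs a per-pixel check, since many PNGs are fully opaque.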
---- Plain colour regions.
The next type of image I looked at is "image with lots of plain colour sections". This one can be sped up by using Surface.fill() instead of blit. We avoid the memory reading of the source image, and additionally can use a hardware accelerated fill. This is (unsurprisingly) common with game graphics, and even with text. Especially modern text with lots of spacing between lines, and spacing beside the text.
First we make a histogram of colours. If there are single colours used a lot, then make a mask for each of those colours (with pygame.mask.from_threshold() again). Find connected components in the mask (with pygame.mask.Mask.connected_components), and use the "maximal rectangle" algorithm above to find areas which are completely that colour. Disregard areas which are small.
See two example screen shots of games where there are lots of plain colour sections.
Here is the easiest way that I think will work to split up the images. Maybe there's a better way?
Find the largest area of the most common colour in the image. Then split the image up into five parts: the plain colour area, plus one area to the left, one to the right, one above, and one below.
As shown in the split up image here: http://renesd.blogspot.com/2017/03/data-aware-jitblit-drawing-125-to-145.html
Prefer wide rectangles over high ones. The reason is that image memory works in rows, so it's faster to draw wide images than high ones. This is why the top and bottom splits go all the way to the edges of the image.
Then on the other four parts, run the algorithm again: get the histogram, find the biggest most-common-colour area, and split that again into five parts. The algorithm stops when the subdivisions are too small, or there are no common plain colour areas.
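The recursive split could be sketched as below, on plain (x, y, w, h) tuples. Here find_plain_rect is a hypothetical stand-in for the histogram + maximal-rectangle step, and the stopping area is a placeholder to be tuned:

```python
# Recursive five-way split sketch: the plain-colour rect becomes a fill;
# the top and bottom strips span the full width (wide rects draw faster),
# the left and right strips sit beside the fill at its height; each
# leftover strip is fed back into the same algorithm.

MIN_AREA = 100  # stop splitting below this; to be found by benchmarking

def split(rect, find_plain_rect):
    x, y, w, h = rect
    if w * h < MIN_AREA:
        return [("blit", rect)]
    plain = find_plain_rect(rect)
    if plain is None:                        # no common plain colour area
        return [("blit", rect)]
    px, py, pw, ph = plain
    parts = []
    top = (x, y, w, py - y)                      # full-width strip above
    bottom = (x, py + ph, w, y + h - (py + ph))  # full-width strip below
    left = (x, py, px - x, ph)                   # beside the fill
    right = (px + pw, py, x + w - (px + pw), ph)
    for sub in (top, bottom, left, right):
        if sub[2] > 0 and sub[3] > 0:            # skip empty strips
            parts.extend(split(sub, find_plain_rect))
    parts.append(("fill", plain))
    return parts

def demo_finder(rect):
    # Hypothetical: only the whole 100x100 image has a plain region.
    return (20, 20, 60, 60) if rect == (0, 0, 100, 100) else None

parts = split((0, 0, 100, 100), demo_finder)
print(parts)  # four "blit" strips plus one "fill"
```

The resulting list is then just a draw plan: fill the "fill" rects with Surface.fill() and blit the rest.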
A heuristic will need to be found for when to stop. If the area is only 100 pixels, for example, then it might not be worth splitting. I'll have to find this by benchmarking on different images and platforms. The work on Surface.blits, which blits many surfaces from C instead of Python, will help with drawing the many smaller Surfaces that all this splitting produces.
On Sat, Apr 1, 2017 at 8:22 AM, René Dudfield <[hidden email]> wrote: