Discussion:
Anti-aliasing graphics library roundup for Delphi
Eric Grange
2006-11-27 07:40:07 UTC
I'm currently in the process of evaluating anti-aliased 2D graphics alternatives
under Delphi for the revamp of an internal library. Below are those I found; if
anyone knows one that isn't there, I would be grateful for links :)

Keep in mind my comments below are for antialiased graphics only, with software
rendering (no hardware acceleration):

* GDI super-sampling (draw on a larger bitmap then downsample it):
- low to medium speed, high memory usage
- low to high quality (but usually only "low" is practical)
- easy to maintain

* GDI+ :
- medium speed, some memory leaks to be avoided
- medium to low quality
- easy to maintain (some deployment issues to watch though)

* G32 : (www.g32.org)
- medium to high speed
- medium quality, but feature set somewhat limited
- easy to maintain

* AntiGrain : (www.aggpas.org)
- medium speed
- high quality, arguably the richest feature set
- complex to maintain

* DirectX/OpenGL :
- very slow speed when hardware not available (esp. for AA modes)
- medium quality, built-in 2D feature set limited (text output...)
- driver-specific quirks complicate maintenance

* Avalon/WPF :
- low to medium speed when hardware not available
- medium quality, rich feature set but with a retained mode renderer
- strong deployment issues for the foreseeable future

In conclusion, no definite winner yet IMO. A cross-over between AntiGrain's
features and G32's maintainability would be a godsend, and if it had the potential
of becoming hardware accelerated one day like the last two options, it would
be perfect... WPF is somewhat close, but its dependencies and deployment issues
are overwhelming, especially with the restrictions on pre-Vista OSes.
Any other options I would have missed?

Eric
Nils Haeck
2006-11-27 10:08:42 UTC
Hi Eric,
Post by Eric Grange
Any other options I would have missed?
Yes, you've missed my new Pyro library :)

I'm currently working on it so there's not much yet on my website, except
these two forum posts:

http://www.simdesign.nl/forum/viewtopic.php?t=648
(contains demo with demo source code)

http://www.simdesign.nl/forum/viewtopic.php?t=316
(overview)

Pyrographics supports:

* Drawing of lines, circles, ellipses, rectangles, rounded rectangles,
polylines, polygons and paths
* Path drawing includes arcs, quadratic beziers and cubic beziers
* Any path or shape can be filled and stroked. Dashed strokes are supported,
strokes can have mitered, beveled or rounded joins, strokes can have square,
butt or round line caps.
* Any fill or stroke can be filled with a solid color, transparent color, or
use a paint server.
* Paint servers available are:
- Linear gradient
- Radial gradient
- Texture (not available yet)
- Pattern (not available yet)
* Text can use any Truetype font, and is drawn natively. Text can be
converted to a path, text can be stroked and filled. Other text specifics:
- Kerning
- Individual character placement
- Word and character spacing
* Images
- Support for Bmp, Jpeg, Png (3rd party), Tga (3rd party)
* Color spaces supported:
- Standard Gray
- Standard RGB
- Standard RGBA
- Standard CMYK
- 8 bits per channel / 16 bits per channel (16bpc not available yet)
* Transforms:
- Affine
- Projective/bilinear
- Linear and radial transforms used in gradients

Plugins available for Pyro, and sold separately:

* EMF import/export (just import for now)
* SVG import/export (just import for now)
- includes animation
* PDF import/export (not available yet)
- version 1.4, all filters, no Type1 yet, supports encryption 40/128 bit

Pyro has its own rendering engine in pure Delphi but can also
"cross-convert" from one format to the other. It's also transparent enough
to perhaps add a hardware-accelerated 2D engine later.

I already sold a few versions of Pyro to "early adopters", basically for
tasks involving rendering of millions of items, and it seems to work well
and fast. I can send you a dcu version for evaluation if you want.

Nils
www.simdesign.nl
Eric Grange
2006-11-27 13:38:24 UTC
Post by Nils Haeck
Yes, you've missed my new Pyro library :)
Interesting specs Nils :)

I've given the precompiled "ShapeDemo" a try; is it compiled with
optimizations? Or in other words, if I implement the demo cases with
another graphics lib, can I compare framerates for reference?

Eric
Nils Haeck
2006-11-28 02:05:16 UTC
Post by Eric Grange
I've given a try to the precompiled "ShapeDemo", is it compiled with
optimizations? Or in other words, if I implement the demo cases with
another graphics lib, can I compare framerates for reference?
In the demo the most demanding item is the 101 ellipses with
semi-transparency. The current blending routine is not yet optimized with
MMX so you'll probably see that Gr32 is faster. But go ahead and try :)

Another thing to note: I am still using non-premultiplied alpha in the demo,
but I'm now changing the rendering model to use pre-multiplied alphas,
which will certainly increase the speed of alphablending.

The generation of polygons (including beziers and arcs) is done very
efficiently in Pyro, so e.g. when rendering text as polygons this should be
noticeable.

When I compile I usually have optimizations turned on.

Nils
Eric Grange
2006-11-28 09:14:59 UTC
Post by Nils Haeck
In the demo the most demanding item is the 101 ellipses with
semi-transparency. The current blending routine is not yet optimized with
MMX so you'll probably see that Gr32 is faster. But go ahead and try :)
Ok :)
Post by Nils Haeck
Another thing to note: I am still using non-premultiplied alpha in the demo,
while now I'm changing the rendering model to use pre-multiplied alphas,
this will certainly increase the speed of alphablending.
I've grown wary of premultiplied alpha blending after fighting a
blending quality issue. I had a case where things wouldn't look as they
did in PaintShop; after checking whether PaintShop was doing any tricks
and whether the PNG loading code did its job, it eventually turned out
that Windows' AlphaBlend and its premultiplied alpha were the culprits.
When premultiplying, you actually lose one bit of precision (and 8 bits
aren't much to begin with), and on gradients of color and alpha and/or
multiple blendings, this can result in banding or opacity/coloring
artefacts that wouldn't have occurred if the blending had happened at
full precision.
Of course, if you work at 16bits per channel or more, that's not an issue :)
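To make the precision loss concrete, a quick round-trip shows how many 8-bit color values become unrecoverable after premultiplying at ~50% alpha. This is an illustrative Python sketch (not code from any of the libraries discussed, and the helper names are made up):

```python
def premultiply(c, a):
    # Integer premultiply as an 8-bit pipeline would do it: round(c * a / 255)
    return (c * a + 127) // 255

def unpremultiply(cp, a):
    # Best-effort recovery of the original color channel
    return min(255, (cp * 255 + a // 2) // a) if a else 0

# At alpha = 128 (~50%), neighbouring color values collapse onto the same
# premultiplied value, so the round trip cannot tell them apart:
lost = sum(1 for c in range(256)
           if unpremultiply(premultiply(c, 128), 128) != c)
```

Well over a hundred of the 256 channel values no longer round-trip at that alpha, which is exactly where the banding comes from.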

I'm not too sure how much pre-multiplying actually gains you in
performance, but it may not be much (since you're often memory-bandwidth
limited when blending a lot of pixels). All I know is that my MMX
alphablending (non-pre-multiplied) is faster than Windows'
pre-multiplied AlphaBlend, but that's not an apples-to-apples comparison.

Eric
Nils Haeck
2006-11-28 13:26:40 UTC
Post by Eric Grange
I'm not too sure how much pre-multiplying actually gains you in
performance, but it may not be much (since you're often memory-bandwidth
limited when blending a lot of pixels). All I know is that my MMX
alphablending (non-pre-multiplied) is faster than Windows' pre-multiplied
AlphaBlend, but that's not an apples-to-apples comparison.
It is much faster because you can skip a few "div 255" operations per blend.
Of course this can be implemented in a smart way, but for instance G32 isn't
pixel perfect when doing it with MMX. Especially the case where a value of
255 ends up as 254 is tricky, because then "white" on the printer becomes
"very light gray", and it will cause ugly black dots once in a while on some
printers.
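For reference, an exact round(x/255) can be computed with adds and shifts only, which sidesteps the 255-becomes-254 problem entirely. Sketched in Python here for clarity, though the point of course applies to the MMX code:

```python
def div255_exact(x):
    # Exact round(x / 255) for 0 <= x <= 255*255, using only adds and shifts
    # (the classic trick from Jim Blinn's "Three Wrongs Make a Right").
    x += 128
    return (x + (x >> 8)) >> 8

# White blended over white stays white at every alpha, so 255 never
# degrades to 254:
ok = all(div255_exact(255 * a + 255 * (255 - a)) == 255 for a in range(256))
```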

I'll implement my library such that premul versus non-premul is an option,
but I certainly want to put it in.

Nils
Eric Grange
2006-11-28 16:17:14 UTC
Post by Nils Haeck
It is much faster because you can skip a few "div 255" operations per blend.
Unless I'm missing something, if you div by 256, you are fast again, and
you lose only 1/255 of precision, which is more accurate than losing a
whole bit when pre-multiplying, no?

With blendAlpha in the regular 0-255 range, just perform

targetColor * (256-blendAlpha) + blendColor * (blendAlpha+1)

and shift the result 8 bits to the right.
The bias introduced in the blendAlpha factors compensates for the
"rounding down" nature of the integer division.

Haven't tried it, but it could be interesting to test the whole range of
target, blend and alpha values to see how much variation there actually
is between the different methods (comparing against a floating-point
reference f.i.).
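For instance, such a brute-force check could look like this (hypothetical Python sketch with made-up function names, comparing the biased div-256 blend against a floating-point reference):

```python
def blend_div256(target, blend, alpha):
    # Biased div-256 blend: the two weights sum to 257, which compensates
    # for the "rounding down" of the >> 8.
    return (target * (256 - alpha) + blend * (alpha + 1)) >> 8

def blend_float(target, blend, alpha):
    # Floating-point reference, rounded to nearest.
    a = alpha / 255.0
    return round(target * (1 - a) + blend * a)

# Compare over the full alpha range for a spread of channel values:
worst = max(abs(blend_div256(t, b, a) - blend_float(t, b, a))
            for t in (0, 1, 127, 128, 254, 255)
            for b in (0, 1, 127, 128, 254, 255)
            for a in range(256))
```

On this sampling the biased blend stays within one level of the rounded floating-point reference, and it is exact at alpha = 0 and alpha = 255.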
Post by Nils Haeck
Especially the cases where a value of 255 ends up as 254 is tricky,
because then "white" on the printer becomes "very light gray", and it
will cause ugly black dots once in a while on some printers.
Though many printers I've had to deal with would end up with very light
gray even with color values of (255,255,255), and you would usually need
some calibration in the printer driver settings to get rid of it.

Eric
Mattias Andersson
2006-11-29 12:29:29 UTC
Hi Nils,
Post by Nils Haeck
Post by Eric Grange
I'm not too sure how much pre-multiplying actually gains you in
performance, but it may not be much (since you're often
memory-bandwidth limited when blending a lot of pixels). All I know
is that my MMX alphablending (non-pre-multiplied) is faster than
Windows' pre-multiplied AlphaBlend, but that's not an
apples-to-apples comparison.
It is much faster because you can skip a few "div 255" operations per
blend. Of course this can be implemented in a smart way, but for
instance G32 isn't pixel perfect when doing it with MMX. Especially
the cases where a value of 255 ends up as 254 is tricky, because then
"white" on the printer becomes "very light gray", and it will cause
ugly black dots once in a while on some printers.
I'll implement my library such that premul versus non-premul is an
option, but I certainly want to put it in.
There's an excellent article on this subject by Alvy Ray Smith:

<ftp://ftp.alvyray.com/Acrobat/4_Comp.pdf>

He argues that premultiplied alpha is both more efficient and leads to more
elegant formulas.

I think it would be interesting to see some benchmark comparisons between
ordinary and premultiplied blending. Even though there are bandwidth
limitations, I think there may still be some increase in performance.

Also, while premultiplying will improve performance for blending, I think
the real advantage comes when using the merge operation.

Without premultiplying you need to compute this:

a_out = a1 + a2 - a1 * a2
c_out = (c1 * a1 + c2 * (1 - a1) * a2) / a_out

and with premultiplying this is simplified to:

a_out = a1 + a2 * (1 - a1)
c_out = c1 + c2 * (1 - a1)
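The two formulations can be checked against each other numerically; a minimal sketch (Python, normalized 0..1 values, hypothetical names):

```python
def over_straight(c1, a1, c2, a2):
    # Non-premultiplied merge: the color sum must be renormalized by a_out.
    a_out = a1 + a2 - a1 * a2
    c_out = (c1 * a1 + c2 * (1 - a1) * a2) / a_out if a_out else 0.0
    return c_out, a_out

def over_premul(p1, a1, p2, a2):
    # Premultiplied merge (p = c * a): no division at all.
    return p1 + p2 * (1 - a1), a1 + a2 * (1 - a1)
```

Feeding the premultiplied version p = c * a should give p_out = c_out * a_out, with the division dropping out entirely.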

Btw. deriving a formula for layer merging is an interesting numerical
exercise. I managed to do that using Bayesian probability theory.

Mattias
Nils Haeck
2006-11-29 13:25:04 UTC
Indeed, I read this article too about a year ago, so that's why I am
implementing it :)

The gain is to be found mostly in the simplified calculation of the alpha of
the result.

Nils
Eric Grange
2006-11-29 18:09:12 UTC
That paper was written in 1995, more than ten years ago...

In '95, everybody was still being impressed by 800x600 and 1024x768 @256
colors, let alone the "hicolor" 16bits per pixel modes, or the
"truecolor" at 24bits per pixel which was considered overkill by many... ^_^
Post by Mattias Andersson
He argues that premultiplied alpha is both more efficient and leads to more
elegant formulas.
However, this surfaces an internal optimization at the library interface
level: color channels no longer hold actual color values, alpha and
color channels cannot be manipulated independently, etc.

That said, are there any plans for a GR128 library at some point?
I've got a couple of HDR processing tasks that could use it, and some SSE code
I could contribute.

Eric
Nils Haeck
2006-11-29 23:13:29 UTC
Post by Eric Grange
However this surfaces an internal optimization at the library interface
level: color channels no longer hold actual color values, alpha and color
channels cannot be manipulated independently, etc.
I don't agree, conceptually there's much to say for "pre-multiplied alpha"
(which is a wrong term anyway, the *colors* are premultiplied, not alpha).

In the case of a premultiplied color C', it means the full color C is
reduced by a factor of alpha (0..100%):

C' = alpha * C

In this case, C' represents the "amount of color" present in that pixel,
e.g. if a pixel is 50% occluded, you can imagine this as "the bucket with
color for that pixel is also half-full".

If you want to be really precise conceptually you should even go a step
further than just alpha, and use a specific "cover" value next to the
"alpha" (so for RGB you'd have cover-alpha-R-G-B: 5 values). "cover"
represents the % visible of the pixel, and "alpha" represents the
transparency (as in glass).

There's a nice chapter in the PDF reference about this.

Especially when rendering shapes that just touch, instead of overlap, like often
seen in flash, the currently used "alpha" formulas in most graphics libs
fail miserably. The edges where two shapes meet often show a value of alpha
< 100%, while the pixel should actually be covered completely (albeit with 2
colors that mix).

This simply is the result of two consecutive blendings, e.g. for shapes that
touch and both cover 50% leading to an alpha of:

0.5 + (1 - 0.5) * 0.5 = 0.75,

where it should have been 1.0

Some libs try to patch over the bleeding with gamma correction, but it
really does not help because the blending model is conceptually wrong.
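In code, the conceptual failure is just this (illustrative Python sketch):

```python
def over_alpha(a_top, a_bottom):
    # Standard "over" accumulation of the alpha channel.
    return a_top + (1 - a_top) * a_bottom

# Two abutting shapes, each covering 50% of the same boundary pixel;
# composited one after the other, the pixel ends up 25% translucent
# even though it is in fact fully covered:
edge_alpha = over_alpha(0.5, 0.5)
```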

My idea is to provide a special blending parameter when rendering layers,
that indicates whether a group of shapes are all on the same plane, so that
when sharing a border they are rendered with a different model. Maybe this
is already done (I seem to recollect seeing some fairly recent posts in the
AGG group), but I haven't seen it yet in any Delphi library.

Nils
Eric Grange
2006-11-30 13:35:12 UTC
Post by Nils Haeck
In this case, C' represents the "amount of color" present in that pixel,
e.g. if a pixel is 50% occluded, you can imagine this as "the bucket with
color for that pixel is also half-full".
Where I disagree is that how much a pixel is occluded or occluding isn't
a property of the pixel, but of the blending operation, where alpha can
be a factor, or not, and can be used "directly", or not.

Premultiplied alpha (and its consequences) is a bit like the old fixed
function pipeline in 3D graphics IMO: data and its processing are finely
intertwined, data is "built for" a specific processing, and processing
is "built for" specific data.

The "fixed" nature of premultiplication becomes apparent if you consider
other operations you could be willing to apply to your pixels, like
color saturation, applying a gamma or colorization (just to name things
I commonly use in my UI, to generate "deactivated"/"hot" variants and
their transitions, or adapt to theme colors).

Eric
Nils Haeck
2006-11-30 15:48:36 UTC
Where I disagree is that how much a pixel is occluded or occluding isn't a
property of the pixel, but of the blending operation, where alpha can be
factor, or not, can be used "directly", or not.
That depends on how you interpret the alpha value. For full-color maps
you're right: you see the alpha as transparency and it is nice to still
know what the color properties are in "full resolution", even if alpha is
very small.

However for intermediate blending operations, you can interpret alpha as the
cover. So if alpha=0, there is simply no shape in that pixel. How much sense
does it make to store color information then?

I think the best solution is to have a bitmap format that specifically knows
about premultiplication. The user or application will interact mostly with
the non-premultiplied bitmaps, but when rendering, the renderer will use
premultiplied bitmaps to speed things up.

There's hardly any extra cost involved, apart from having to convert
full->pre before and pre->full after.

full->pre is relatively cheap, and a quick test for alpha=255 avoids doing
the multiplication.

pre->full costs more, but in almost all cases the user/app doesn't need
the raw result back, it just wants it displayed on a GDI device, and for that a
backdrop with alpha=255 is mostly used, so there won't be any alpha < 255
anyway.
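A sketch of the two conversions with the fast paths mentioned above (illustrative Python with integer rounding; the names are made up):

```python
def to_premul(r, g, b, a):
    # full -> pre: cheap, and the common opaque case skips the multiplies.
    if a == 255:
        return r, g, b, a
    return ((r * a + 127) // 255, (g * a + 127) // 255,
            (b * a + 127) // 255, a)

def from_premul(r, g, b, a):
    # pre -> full: needs a divide per channel; the usual GDI backdrop case
    # (a = 255) and the empty case (a = 0) short-circuit.
    if a == 255 or a == 0:
        return r, g, b, a
    return (min(255, (r * 255 + a // 2) // a),
            min(255, (g * 255 + a // 2) // a),
            min(255, (b * 255 + a // 2) // a), a)
```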

Nils
Eric Grange
2006-12-01 11:08:27 UTC
Post by Nils Haeck
However for intermediate blending operations, you can interpret alpha as the
cover. So if alpha=0, there is simply no shape in that pixel. How much sense
does it make to store color information then?
Because a subsequent filtering or blending operation could be operating
on the alpha channel only, and result in a non-null alpha?
Post by Nils Haeck
How much sense does it make to store color information then?
Storing is a different issue IMO: when you set the color channels to
(0,0,0) because alpha is zero and premultiplication is used, you do not
avoid storing color information, you just "overwrite" the color
information with "black", or a darker shade of the color when alpha < 255.
Post by Nils Haeck
I think the best solution is to have a bitmap format that specifically knows
about premultiplication. The user or application will interact mostly with
the non-premultiplied bitmaps, but when rendering, the renderer will use
premultiplied bitmaps to speed things up.
Yep, that's a form of "compilation", however for it to be beneficial you
have to have enough blending operations to recoup the cost of the
premultiplication, and be sure enough that the color precision lost
won't ever be needed.
Post by Nils Haeck
There's hardly any extra costs involved, except from having to convert
full->pre before and pre->full after.
full->pre is relatively cheap, and a quick test for alpha=255
avoids doing the multiplication.
Well, the conversion in itself isn't that cheap: even if you neglect
processing cost, you'll still have to read/write a lot of memory (which
will often not fit in cache). I wouldn't be surprised if it wasn't too
far in execution time from an alpha blending.
(the test for alpha can also be done in an alpha blending, so the
differential between the two will be on pixels with intermediate alphas)
Post by Nils Haeck
pre->full costs more, but in almost all cases the user/app doesn't
want the result, it just wants the result displayed on a GDI device,
and for that a backdrop with alpha=255 is mostly used, so there won't
be any alpha < 255 anyway.
Though if the user/app ever wants a non-premultiplied result with alpha, f.i. if
it is to be reused in another application/library, like as a texture for
3D rendering, there will be significant color artefacts on all areas
that had an alpha<255.

In the end, it's all about development time however:
If you have enough time to write and optimize both premultiplied and
non-premultiplied, then by all means you should! :)
Though if only one ought to be implemented (which is kinda what the
article suggested), premultiplication is a bit on the weak side IMO.

Eric
Mattias Andersson
2006-11-30 12:05:42 UTC
Post by Eric Grange
That paper was written in 1995, that was more than ten years ago...
In '95, everybody was still being impressed by 800x600 and 1024x768
@256 colors, let alone the "hicolor" 16bits per pixel modes, or the
"truecolor" at 24bits per pixel which was considered overkill by many... ^_^
Yeah, in ten years when we all have holographic displays, we will remember
this fuss about HDR... :)
Post by Eric Grange
Post by Mattias Andersson
He argues that premultiplied alpha is both more efficient and leads
to more elegant formulas.
However this surfaces an internal optimization at the library
interface level: color channels no longer hold actual color values,
alpha and color channels cannot be manipulated independently, etc.
I think you are right that this representation of color values can be
disadvantageous.

As an example: Assume that we have a tuple (A, C), where A is alpha and C is
the color value (premultiplied by A). Now, what if we want to change the
opacity to some new value B?

This would involve the following computations:

C := C * B/A;
A := B;

Without premultiplication, we would only need to update the alpha value.
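As a sketch (Python, normalized values, hypothetical names):

```python
def set_opacity_premul(p, a_old, a_new):
    # Premultiplied tuple (A, C): every color channel must be rescaled
    # by B/A before the alpha can be replaced...
    return tuple(c * a_new / a_old for c in p), a_new

def set_opacity_straight(c, a_old, a_new):
    # ...whereas with straight alpha only the alpha value changes.
    return c, a_new
```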
Post by Eric Grange
That said, are there any plans for a GR128 library at some point?
I've got a couple of HDR processing that could use it, and some SSE
code I could contribute.
Well, it's definitely on the horizon. We are thinking about a generic
solution for custom bitmap formats.

This would require that each bitmap has its own signature that describes its
representation in memory.

I think the GDI BITMAPV4HEADER would be much too limited for this:

<http://msdn2.microsoft.com/en-us/library/ms532300.aspx>

We need something that will allow you to encode the color mode of each
plane. Perhaps we should also try to add support for non-interleaved data.

Once we have this worked out, we can start writing routines for converting
between various bitmap formats and we can add new optimized low-level
routines.

Anyhow, since this is an open source project, anyone is welcome to
contribute with source code and ideas.

If you want you could send the code to my e-mail (mattias at centaurix.com)
and then I'll look into it later.

Mattias
Eric Grange
2006-11-30 14:03:03 UTC
Post by Mattias Andersson
Yeah, in ten years when we all have holographic displays, we will remember
this fuss about HDR... :)
There was some hardware projecting 3D images in space (projection, not
holograms iirc) demonstrated some time ago in various fairs, the
remaining issue they had was that their tech only allowed adding light, not
subtracting it, so the visual quality was decent only in a dark room...

But we could be getting there soon :)
Post by Mattias Andersson
Without premultiplication, we would only need to update the alpha value.
IMO premultiplication is useful as a "compiled bitmap" format of sorts,
where you know alpha blendings are the operations that will mostly
occur, and where you know the same pixels will be blended multiple times
(to amortize the premultiplication overhead).

But performance-wise, this has to be weighed against an approach that
would "premultiply on the spot", which can certainly be made quite a bit
more efficient than a premultiplication pass (per pixel) because of the
reduced memory bandwidth needs, so you may need quite a few
blendings before you can recoup the premultiplication cost.

There are also 3DNow! and SSE instructions that allow computing blends
fairly quickly (division included), at floating-point precision and with very
fast conversions to and from integers. So it's feasible to perform
everything in a mathematically "correct" fashion, something that wasn't
possible in 1995 (at interactive speeds anyway).
Post by Mattias Andersson
Well, it's definitely on the horizon. We are thinking about a generic
solution for custom bitmap formats. [...]
Anyhow, since this is an open source project, anyone is welcome to
contribute with source code and ideas.
I'll be hovering around the newsgroups then :)
It's something I had started for GLScene2 (to preprocess HDR textures,
and postprocess HDR output mainly), but couldn't find the time for
(itself or its usage).

Eric
Mattias Andersson
2006-12-01 00:21:31 UTC
Post by Eric Grange
IMO premultiplication is useful as a "compiled bitmap" format of
sorts, where you know alpha blendings are the operations that will
mostly occur, and where you know the same pixels will be blended
multiple times (to amortize the premultiplication overhead).
A similar question would be whether or not to support compressed bitmap
formats. I guess for RLE encoded bitmaps you could write quite efficient
blending routines. For instance, if you have a lot of fully transparent
pixels, then you can simply skip to the next non-transparent pixel. Also it
would probably give better memory performance.
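A minimal sketch of such an RLE-aware blend (Python for illustration; the run format and the names are assumptions, not any library's actual API):

```python
def blend_rle_row(runs, dst, blend_pixel):
    # runs: list of (count, alpha, color) spans for one source row.
    # Fully transparent runs are skipped wholesale, fully opaque runs are
    # copied without blending; only intermediate alphas pay for a blend.
    x = 0
    for count, alpha, color in runs:
        if alpha == 0:
            pass                          # skip the transparent span entirely
        elif alpha == 255:
            dst[x:x + count] = [color] * count
        else:
            for i in range(x, x + count):
                dst[i] = blend_pixel(dst[i], color, alpha)
        x += count
    return dst
```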
Post by Eric Grange
There are also 3DNow! and SSE instructions that allow to compute
fairly quickly (division included), at floating point precision and
with very fast conversions to and from integers. So it's feasible to
perform everything in a mathematically "correct" fashion, something
that wasn't possible in 1995 (at interactive speeds anyway).
I think it would be really cool if we could replicate the current GR32
behaviour for HDR images. Although, another problem that springs to mind is
how to efficiently convert from HDR to DIBs (i.e. for blitting to the
screen). Would it make sense to use OpenGL for this rather than the GDI?
Post by Eric Grange
Post by Mattias Andersson
Well, it's definitely on the horizon. We are thinking about a generic
solution for custom bitmap formats. [...]
Anyhow, since this is an open source project, anyone is welcome to
contribute with source code and ideas.
I'll be hovering around the newsgroups then :)
Yep, sounds like a good idea...
Post by Eric Grange
It's something I had started for GLScene2 (to preprocess HDR textures,
and postprocess HDR output mainly), but couldn't find the time for
(itself or its usage).
Where can I find more info about the GLScene2 project?

Mattias
Lord Crc
2006-12-01 00:36:31 UTC
On Fri, 1 Dec 2006 01:21:31 +0100, "Mattias Andersson"
Post by Mattias Andersson
I think it would be really cool if we could replicate the current GR32
behaviour for HDR images
One thing I've found difficult with HDR is filtering. Filters with
negative values/lobes can produce very "unpleasant" results.
Unfortunately that rules out most of the better filters :(

Still not sure how to best deal with it... I've tried some adaptive
approaches (ie use one filter for "normal range" and a positive-only
filter for pixels with "large range").

- Asbjørn
Mattias Andersson
2006-12-02 01:26:44 UTC
Post by Lord Crc
One thing I've found difficult with HDR is filtering. Filters with
negative values/lobes can produce very "unpleasant" results.
Unfortunately that rules out most of the better filters :(
Still not sure how to best deal with it... I've tried some adaptive
approaches (ie use one filter for "normal range" and a positive-only
filter for pixels with "large range").
If you use the ideal low-pass filter (i.e. the Sinc filter) on ordinary
device-referred images, then this will also produce ringing along high
intensity edges. However, I suppose this effect is amplified for
scene-referred HDR images.

I'm not sure what would be the best way to approach this. An adaptive method
might do the trick, although it does sound a bit hackish. :)

One interesting resampling method is so called "spectral interpolation",
where you pad an image with zeroes in the frequency domain and then
transform back into the spatial domain. This is equivalent to convolving
with a Sinc filter (but at cost O(n*log(n)) instead of O(n^2)).
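A small demonstration of the idea (Python; a naive O(n^2) DFT is used for clarity instead of an FFT, and it assumes an even-length input):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (an FFT would do the same job faster).
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def spectral_upsample(x, m):
    # Zero-pad the spectrum between the positive and negative frequencies
    # (splitting the Nyquist bin), then inverse-transform: equivalent to
    # convolving with a Sinc filter. Assumes even len(x) and m >= len(x).
    n = len(x)
    X = dft(x)
    h = n // 2
    padded = X[:h] + [X[h] / 2] + [0] * (m - n - 1) + [X[h] / 2] + X[h + 1:]
    return [(m / n) * v.real for v in idft(padded)]

# One cycle of a cosine sampled at 8 points, upsampled to 16: the result
# is the same cosine sampled on the finer grid.
coarse = [cmath.cos(2 * cmath.pi * j / 8).real for j in range(8)]
fine = spectral_upsample(coarse, 16)
```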

Mattias
Lord Crc
2006-12-02 01:59:58 UTC
On Sat, 2 Dec 2006 02:26:44 +0100, "Mattias Andersson"
Post by Mattias Andersson
If you use the ideal low-pass filter (i.e. the Sinc filter) on ordinary
device-referred images, then this will also produce ringing along high
intensity edges. However, I suppose this effect is amplified for
scene-referred HDR images.
Indeed, if you have a 1e4 magnitude edge, even a small negative lobe
will produce very "bad" results.
Post by Mattias Andersson
I'm not sure what would be the best way to approach this. An adaptive method
might do the trick, although it does sound a bit hackish. :)
Indeed. I once asked in comp.graphics.algorithms but didn't get any
replies :(
Post by Mattias Andersson
One interesting resampling method is so called "spectral interpolation",
where you pad an image with zeroes in the frequency domain and then
transform back into the spatial domain. This will be equal to convolving
with a Sinc filter (but at cost O(n*log(n)) instead of O(n^2)).
How do you utilize this for arbitrary warping (that's where I got
stuck)?

- Asbjørn
Mattias Andersson
2006-12-02 02:05:39 UTC
Post by Lord Crc
How do you utilize this for arbitrary warping (that's where I got
stuck)?
Ah, actually this only works if you are rescaling an image. It is useless
for per-pixel resampling.

Mattias
Lord Crc
2006-12-02 02:51:30 UTC
On Sat, 2 Dec 2006 03:05:39 +0100, "Mattias Andersson"
Post by Mattias Andersson
Post by Lord Crc
How do you utilize this for arbitrary warping (that's where I got
stuck)?
Ah, actually this only works if you are rescaling an image. It is useless
for per-pixel resampling.
Right, that explains why I couldn't figure that part out ;)

- Asbjørn
Eric Grange
2006-12-01 13:00:31 UTC
Post by Mattias Andersson
A similar question would be whether or not to support compressed bitmap
formats. I guess for RLE encoded bitmaps you could write quite efficient
blending routines. For instance, if you have a lot of fully transparent
pixels, then you can simply skip to the next non-transparent pixel. Also it
would probably give better memory performance.
This is what everyone was doing way back on the 8bit machines :)
I still have an RLE blender somewhere I think, though I doubt it handles
anything more than 256 color graphics and color-keyed transparency...
Post by Mattias Andersson
I think it would be really cool if we could replicate the current GR32
behaviour for HDR images. Although, another problem that springs to mind is
how to efficiently convert from HDR to DIBs (i.e. for blitting to the
screen).
With 3DNow! or SSE this is quite straightforward if you have a linear
conversion, though in the more general case you'll need
exposure/gamma adjustments (rather easy) and tone mapping, which can
get rather arcane.
The holy grail is the tone mapping used in HDR photography, which can make
scenes look more "real-world" like, but involves complex non-linear
transformations (and in the final result, a pixel that was darker than
another originally can end up lighter in the tone-mapped version).
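At the simple end of the spectrum, the classic global Reinhard operator l/(1+l) is a one-liner (Python sketch; note this global operator is monotonic, unlike the local HDR-photography operators just described):

```python
def reinhard_tonemap(l):
    # Global Reinhard operator: compresses luminance [0, inf) into [0, 1),
    # nearly linear for dark values, rolling off smoothly in the highlights.
    return l / (1.0 + l)

def to_8bit(l):
    # Quantize a tone-mapped value to an 8-bit display level.
    return min(255, int(reinhard_tonemap(l) * 255 + 0.5))
```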
Post by Mattias Andersson
Would it make sense to use OpenGL for this rather than the GDI?
Recent hardware has support for HDR formats, though until recently this
support was kinda limited (f.i. you couldn't blend into an HDR buffer).
Limitations are removed one after another, but you still need very
recent hardware to accomplish it.
Post by Mattias Andersson
Where can I find more info about the GLScene2 project?
There is a little bit of it in the form of newsgroup posts, but not much
actually. Fundamentally it's about re-architecting around the
programmable pipelines, and allowing multi-core execution.

Eric
Mattias Andersson
2006-12-02 02:02:28 UTC
Post by Eric Grange
This is what everyone was doing way back on the 8bit machines :)
I still have an RLE blender somewhere I think, though I doubt it
handles anything more than 256 color graphics and color-keyed
transparency...
Ah, that's interesting -- I wasn't around back then. An 8bit RLE blender
might not be a bad idea for GR32 either... :)
Post by Eric Grange
With 3DNow! or SSE this is quite straightforward if you have a linear
conversion, though in the more general case, you'll have need for
exposure/gamma adjustments (rather easy), and tone mapping, which can
get rather arcane.
The holy grail is tone mapping used in HDR photography, which can make
scenes look more "real-world" like, but involve complex non linear
transformation (and in the final result, a pixel that was darker than
another originally, can end up lighter in the tone mapped version).
I need to study this concept in more detail. It seems there are a number of
different algorithms for performing tone mapping.

I will be posting to the graphics32.team newsgroup once I have some initial
outline of the new design of the library.

Mattias
Eric Grange
2006-12-04 09:51:26 UTC
Post by Mattias Andersson
Ah, that's interesting -- I wasn't around back then. An 8bit RLE blender
might not be a bad idea for GR32 either... :)
Those 8 bits usually held more than one complete pixel; merely using
4-bit paletted colors was often so wasteful that you had to lower the
display resolution to get them.
On the other hand, you could actually manage to display more than 16
colors with only 1 bit per pixel... something you can't do anymore on
modern hardware ^_^

Eric
Mattias Andersson
2006-11-27 11:35:06 UTC
Permalink
Hi Eric,
Post by Eric Grange
In conclusion, no definite winner yet IMO, a cross-over between
AntiGrain's features and G32 maintainability would be a godsend, and
if it had the potential of becoming hardware accelerated one day like
the last two options, this would be perfect...
Great to hear that you're considering Graphics32 as a candidate for this. I
would definitely argue that this is a good choice for a 2D graphics library.

While you are right that it does not provide the same extensive feature set
as AGG, we (the developers) have made a lot of progress toward bridging the
gap. For example, in the next version we are planning to introduce the
following new features:

- Antialiased text rendering (using polygon routines);
- Cubic Bézier curves;
- Splines (with support for custom interpolation methods);
- Color gradients (using samplers);
- New transformation for mapping text to a path (similar to AGG).

If your main concern is antialiasing, then I doubt you'll find any other
library that provides as well-thought-out an implementation as G32.

However, the term 'antialiasing' might be a bit ambiguous. It would
probably be better to break it down into three different
categories:

- Polygonal antialiasing
- Resampling
- Supersampling

I think G32 provides an elaborate solution for each one of these categories.
What makes it stand out from other libraries though, is its flexible and
extensible design. While many libraries provide a set of hard-coded
resampling methods, G32 allows you to easily implement and register your own
resampling classes. Similarly, you can develop your own "sampling classes"
for acquiring color samples from an arbitrary process (e.g. a raytracer). By
attaching a supersampler you will then automatically get high quality
anti-aliased output.
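The sampler/supersampler split described above can be illustrated with a minimal sketch (Python for brevity; this mimics the idea, not GR32's actual class interface, and all names are made up): any callable mapping (x, y) to an intensity acts as a sampler, and a supersampler wraps it by averaging a regular grid of sub-samples per pixel.

```python
def make_supersampler(sampler, level=4):
    """Wrap a point sampler (x, y) -> float so that each pixel is the
    average of level*level sub-samples on a regular grid.

    Any process can stand in as `sampler` (e.g. a raytracer's shading
    function), which is the appeal of the design described above.
    """
    step = 1.0 / level
    offset = step / 2.0          # centre sub-samples within their cells
    def supersample(x, y):
        total = 0.0
        for j in range(level):
            for i in range(level):
                total += sampler(x + offset + i * step,
                                 y + offset + j * step)
        return total / (level * level)
    return supersample
```

Registering a different `make_supersampler` (jittered grid, adaptive subdivision, ...) leaves client code untouched, which is the extensibility point being made.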

We have discussed various solutions for adding support for hardware
accelerated modules (and I think this is definitely something that we should
explore in greater depth). Also we've discussed what routines could benefit
from being parallelized on multi-core systems. I think a multithreaded
rasterizer class was posted in the Graphics32 newsgroup just a few weeks
ago.

Anyway, if you have ideas about how to integrate hardware accelerated
modules, that would be very much appreciated! :)

Mattias
_______________
Team Graphics32
http://www.graphics32.org
Eric Grange
2006-11-27 14:18:59 UTC
Permalink
Post by Mattias Andersson
Great to hear that you're considering Graphics32 as a candidate for this.
I've always had it around :)
Post by Mattias Andersson
- Antialiased text rendering (using polygon routines);
This is the area where we have the "most" requirements here, as beyond
text output, there is a need for some basic text layout capability
too. Nothing extreme, just DrawText-like: align & justify, word wrap,
handle tabs, determine bounding rectangles... that kind of stuff.

Another requirement is printing (to a paper printer or a virtual one,
like for PDF), but that's another can of worms.
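The DrawText-like layout features listed above mostly reduce to a greedy line breaker over measured words plus per-line alignment. A minimal sketch (Python; the `measure` callback merely counts characters here, standing in for real font metrics, which is an assumption of this sketch):

```python
def wrap_text(text, max_width, measure=len):
    """Greedy word wrap: `measure` maps a string to its width."""
    lines, current = [], ""
    for word in text.split():
        candidate = word if not current else current + " " + word
        if measure(candidate) <= max_width or not current:
            current = candidate       # word fits (or is alone on a line)
        else:
            lines.append(current)     # break before the word
            current = word
    if current:
        lines.append(current)
    return lines

def align_line(line, max_width, alignment="left"):
    """Pad a wrapped line for left/right/center alignment."""
    pad = max_width - len(line)
    if alignment == "right":
        return " " * pad + line
    if alignment == "center":
        return " " * (pad // 2) + line
    return line
```

Bounding-rectangle computation then falls out for free: it is the line count times the line height by the widest measured line.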
Post by Mattias Andersson
Anyway, if you have ideas about how to integrate hardware accelerated
modules, that would be very much appreciated! :)
The most problematic bit IME with hardware accelerated rendering is that
there are many drivers whose output quality you "cannot trust",
especially in the business machine range of hardware (aka integrated
graphics chipsets); edge anti-aliasing and line continuity are already
quite troublesome to get "right" all the time on all drivers.

I recently bumped into a page that describes the typical issues with
explicit pictures:
http://homepage.mac.com/arekkusu/bugs/invariance/index.html

I had a quick go at implementing something similar to
http://homepage.mac.com/arekkusu/bugs/invariance/TexAA.html
Enough to confirm it works, but not near enough to get something usable
as a library.

Eric
Mattias Andersson
2006-11-27 17:43:47 UTC
Permalink
Post by Eric Grange
Post by Mattias Andersson
- Antialiased text rendering (using polygon routines);
This is the area where we have the "most" requirements here, as beyond
text output, there is a need to have some basic text layout capability
too. Nothing extreme, just DrawText-like: to align & justify, word
wrap, handle tabs, determine bounding rectangles... that kind of
stuff.
As a small teaser of the new text rendering routine and the new path
transformation, I've uploaded this demo:

<http://developer.centaurix.com/pub/textfx.zip>

Credits to Michael Hansen for his nice Perlin noise effect.

Currently we have only added basic routines for converting glyph outlines to
polygons and for rendering non-wrapped text, but it should be fairly easy to
extend it with the same layout features that are supported by DrawText. Also
I think it would make sense to derive a new font class for supporting custom
kerning etc.

As an aside: I've been working on another project that will allow you to
build your own G32-based user interfaces (similar to Delphi's form
designer). I've added a number of different light-weight 'cell' classes that
can be both aligned and autosized. For instance, there are text cells, image
cells, effect cells and grid cells. Also I've written special routines for
reading and writing cells to streams.
Post by Eric Grange
Post by Mattias Andersson
Anyway, if you have ideas about how to integrate hardware accelerated
modules, that would be very much appreciated! :)
The most problematic bit IME with hardware accelerated rendering is
that there are many drivers whose output quality you "cannot trust",
especially in the business machine range of hardware (aka integrated
graphics chipsets), edge anti-aliasing or line continuity are already
quite troublesome to get "right" all the time on all drivers.
One idea would be to develop a framework that supports both hardware
accelerated and CPU-based rendering. That way one could always switch to the
CPU if certain chipset features are not present.
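That framework idea can be sketched as capability-based backend selection: each backend advertises the features whose output it can be trusted to render correctly, and the first capable backend wins, with the software renderer as the guaranteed last resort. (Python sketch; `Backend`, `select_backend` and the feature names are hypothetical, not an actual GR32 interface.)

```python
class Backend:
    """Minimal backend descriptor; in practice `features` would come
    from probing the driver at startup (illustrative only)."""
    def __init__(self, name, features, render):
        self.name = name
        self.features = set(features)
        self.render = render

def select_backend(backends, required):
    """Return the first backend supporting every required feature.
    The last list entry is expected to be the CPU fallback."""
    required = set(required)
    for backend in backends:
        if required <= backend.features:
            return backend
    raise RuntimeError("no capable backend")
```

Putting a CPU backend that advertises the full feature set at the end of the list guarantees correct (if slower) output on the untrustworthy drivers Eric mentions.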
Post by Eric Grange
Recently bumped on a page that describes the typical issues with
explicit pictures
http://homepage.mac.com/arekkusu/bugs/invariance/index.html
I had a quick go at implementing something similar to
http://homepage.mac.com/arekkusu/bugs/invariance/TexAA.html
Enough to confirm it works, but not near enough to get something
usable as a library.
Looks very promising. It would be interesting to try it out in Delphi and do
some benchmarking. :)

Mattias
AntonE
2006-11-27 19:20:56 UTC
Permalink
Cool!
Chris Trueman
2006-11-27 21:12:48 UTC
Permalink
I like it. Cute.



Chris.
Eric Grange
2006-11-28 06:46:16 UTC
Permalink
Post by Mattias Andersson
As a small teaser of the new text rendering routine and the new path
Nice!
Now the infamous question: any timeframe? ;)
Or is it already somewhere in the SourceForge CVS or SVN repository?

Eric
Mattias Andersson
2006-11-28 12:28:14 UTC
Permalink
Post by Eric Grange
Post by Mattias Andersson
As a small teaser of the new text rendering routine and the new path
Nice!
Now the infamous question: any timeframe? ;)
Or is it already somewhere in the SourceForge CVS or SVN repository?
Well, I can't say that we have a timeframe at the moment. I'm working on my
M.Sc. thesis and Andre is busy with other projects, so the progress rate
isn't very high.

We have a Team Coherence repository (http://www.teamcoherence.com), but it
isn't configured for anonymous access. What you can do is to send an e-mail
to team AT graphics32.org and ask for your own personal account.

Mattias
Eric Grange
2006-11-29 07:12:40 UTC
Permalink
Post by Mattias Andersson
We have a Team Coherence repository (http://www.teamcoherence.com)
I don't have Team Coherence... I'll wait for when it's back in CVS/SVN. :)

Eric
Mattias Andersson
2006-11-29 11:11:06 UTC
Permalink
Post by Eric Grange
Post by Mattias Andersson
We have a Team Coherence repository (http://www.teamcoherence.com)
I don't have Team Coherence... I'll wait for when it's back in CVS/SVN. :)
I've found out that there's a new IDE plug-in for SVN. This could be a very
interesting alternative to TC.

Some screenshots can be found here:

http://tondrej.blogspot.com/2006/11/delphisvn-for-delphi-7.html

Mattias
Eric Grange
2006-11-29 17:26:57 UTC
Permalink
Post by Mattias Andersson
I've found out that there's a new IDE plug-in for SVN. This could be a very
interesting alternative to TC.
I've converted long ago to Tortoise and shell based VCS, I'm not sure
I'll ever revert to using an IDE-based one willingly :)

Eric
TOndrej
2006-12-01 07:42:14 UTC
Permalink
Post by Eric Grange
I've converted long ago to Tortoise and shell based VCS, I'm not sure
I'll ever revert to using an IDE-based one willingly :)
Anyway, delphisvn is not a full-featured Subversion client (wasn't
intended to be) and can hardly be used as your only Subversion client.
All it aims to do is provide IDE integration to make some of the often
performed Subversion-related actions easier.

For example, I have installed and regularly use TortoiseSVN, svn.exe and
delphisvn. Tortoise is great, I love it. The command line client is
useful for automated builds (e.g. from specific revisions). I use
delphisvn mainly for quick checking for modifications and the integrated
history/diff view.

HTH
TOndrej
Zoltan Komaromy
2006-12-06 11:23:57 UTC
Permalink
Hi Mattias!
Post by Mattias Andersson
As an aside: I've been working on another project that will allow you to
build your own G32-based user interfaces (similar to Delphi's form
designer). I've added a number of different light-weight 'cell' classes that
can be both aligned and autosized. For instance, there are text cells, image
cells, effect cells and grid cells. Also I've written special routines for
reading and writing cells to streams.
Do you have any screenshot or demo for this? It sounds very interesting...

Regards
Zoltan
Mattias Andersson
2006-12-07 16:31:34 UTC
Permalink
Post by Zoltan Komaromy
Do you have any screenshot or demo for this? It sounds very
interesting...
I will probably have some screenshots ready in a few days. Stay tuned.

Cheers,
Mattias
Zoltan Komaromy
2006-12-08 09:48:32 UTC
Permalink
Post by Mattias Andersson
Post by Zoltan Komaromy
Do you have any screenshot or demo for this? It sounds very
interesting...
I will probably have some screenshots ready in a few days. Stay tuned.
Cheers,
Mattias
Hi,

Waiting for that!! :))

Zoltan

Edwin
2006-12-01 08:43:48 UTC
Permalink
Post by Eric Grange
I'm currently in the process of evaluating anti-aliased 2D graphics alternatives
under Delphi for the revamp of an internal library, below are those I found, if
anyone knows one that isn't there, I would be grateful for links :)
Keep in mind my comments below are for antialiased graphics only, with software
rendering (no hardware acceleration):
* GDI super-sampling (draw on a larger bitmap then downsample it):
- low to medium speed, high memory usage
- low to high quality (but usually only "low" is practical)
- easy to maintain
* GDI+ :
- medium speed, some memory leaks to be avoided
- medium to low quality
- easy to maintain (some deployment issues to watch though)
* G32 : (www.g32.org)
- medium to high speed
- medium quality, but feature set somewhat limited
- easy to maintain
* AntiGrain : (www.aggpas.org)
- medium speed
- high quality, arguably the richest feature set
- complex to maintain
* DirectX/OpenGL :
- very slow speed when hardware not available (esp. for AA modes)
- medium quality, built-in 2D feature set limited (text output...)
- driver-specific quirks complicate maintenance
* Avalon/WPF :
- low to medium speed when hardware not available
- medium quality, rich feature set but with a retained mode renderer
- strong deployment issues for the foreseeable future
In conclusion, no definite winner yet IMO, a cross-over between AntiGrain's
features and G32 maintainability would be a godsend, and if it had the potential
of becoming hardware accelerated one day like the last two options, this would
be perfect... WPF is somewhat close, but dependencies and deployment issues are
overwhelming, especially with the restrictions on pre-Vista OSes.
Any other options I would have missed?
Eric
Hi Eric,
There is an AGG-derived project called AggPlus which simulates the GDI+
interface and is implemented on top of AGG; however, it's a C++ lib, so I
couldn't test it... It would be great if somebody developed a lib like
that for Delphi :)

And I'm currently developing mind mapping software using gr32 and
an extension from a Russian programmer, so far so good, except that I
don't know how to draw a spline with an arrow cap, could anybody give me a
hint? Thanks!

To Nils,
Sounds great! I like the pdf/svg import/export; I hope you implement it so
that little coding is needed to output to pdf/svg/emf, just like what GDI+
does. Waiting for the first release to have a try!

To Mattias,
Thanks very much for the great graphics32 library! I am very excited
to hear the info about the next release of gr32! Hope I can have a look at
it ASAP!

--------------
Edwin
Best Regards,
Mind Visualizer--Productive Mind Mapping Software
http://www.mindmapware.com



--- posted by geoForum on http://delphi.newswhat.com
Nils Haeck
2006-12-01 11:54:30 UTC
Permalink
Post by Edwin
To Nils,
Sounds great! I like the pdf/svg import/expert, hope you to implemented to
allow little coding to output to the pdf/svg/emf, just like what GDI+
dose. and waiting for the first release to have a try!
GDI+ doesn't output to PDF or SVG, just EMF. And EMF is rather weak...
think of blurring filters for soft shadows and such things.

Nils
crusoe
2006-12-07 09:37:59 UTC
Permalink
If you don't mind a DLL, cairo is being actively developed, and here is an
older proof of concept:
http://lists.freedesktop.org/archives/cairo/2004-October/002032.html
Post by Eric Grange
I'm currently in the process of evaluating anti-aliased 2D graphics alternatives
under Delphi for the revamp of an internal library, below are those I found, if
anyone knows one that isn't there, I would be grateful for links :)
Eric Grange
2006-12-07 10:22:05 UTC
Permalink
Post by crusoe
If you don't mind a dll, cairo is being actively developed and here is some
older proof of concept
http://lists.freedesktop.org/archives/cairo/2004-October/002032.html
The links in this post give me Error 404.
I tried http://cairographics.org/, but most of the pages beyond the main
page timed out or gave me errors... the bindings page was one of the few
that I could access, but Delphi wasn't mentioned...

Eric
crusoe
2006-12-07 12:45:36 UTC
Permalink
Post by Eric Grange
Post by crusoe
http://lists.freedesktop.org/archives/cairo/2004-October/002032.html
The links in this post give me Error 404.
Sorry, I did not notice and I don't know where this file could be.