Scale a vector

  • Thread starter: Matteo Migliore
Hi!

I have to scale a vector of numbers of size N to a vector of size M.
The transformation is like a zoom on an image, but I need it on a vector.

The second size can be M >= N or M <= N, with M > 0.

The value of each element can be from 0 to 255.

Are there any methods I can use?

I don't need just a simple resize.
Thx! ;-)
Matteo Migliore.
 
Matteo said:
Hi!

I have to scale a vector of numbers of size N to a vector of size M.
The transformation is like a zoom on an image, but I need it on a vector.

I'm not really clear on the exact meaning of your description; are "N"
and "M" the magnitude of the vector? Do you already know those values,
or do they need to be calculated as well? How many dimensions does the
vector have?

Anyway, it seems to me that if you're just doing scaling, it would be
easiest to just write a simple method that does the work.

However, there is a Matrix class; if you already know M and N, and your
vector is a 2-dimensional vector, you could initialize a Matrix instance
with the scaling factor of M/N along the diagonal and then use the
Matrix.TransformVectors() method to scale your vector.

If the above doesn't help, maybe you could be more specific.

Pete
 
Peter said:
I'm not really clear on the exact meaning of your description; are "N"
and "M" the magnitude of the vector? Do you already know those
values, or do they need to be calculated as well? How many
dimensions does the vector have?

N is the size of the vector; M is the desired size of the new vector
after processing.

By vector I mean an array of int: int[] vector = new int[] { 10, 15, 17, 30, 200 };

In this case N = 5, and for example I need a new vector of size 8 that
redistributes the values. Think of it as a line:
P1(1, 10) - P2(2, 15) - P3(3, 17) - P4(4, 30) - P5(5, 200)

I need a new line:
P1(1, ?) - P2(2, ?) - P3(3, ?) - P4(4, ?) - P5(5, ?) - P6(6, ?) - P7(7, ?) - P8(8, ?)

Or a new vector of 4 elements, or 3, etc...
Anyway, it seems to me that if you're just doing scaling, it would be
easiest to just write a simple method that does the work.

However, there is a Matrix class; if you already know M and N, and
your vector is a 2-dimensional vector, you could initialize a Matrix
instance with the scaling factor of M/N along the diagonal and then
use the Matrix.TransformVectors() method to scale your vector.

No, it's not a 2-dimensional vector, it's a 1-dimensional vector. But can I
use Matrix.TransformVectors()?
If the above doesn't help, maybe you could be more specific.

Pete

Thx a lot! ;-)
Matteo Migliore.
 
Matteo said:
N is the size of the vector, M is the wanted size after the processing
of the new vector.

IMHO, we are having some problems with your terminology. For example:
By vector I mean an array of int: int[] vector = new int[] { 10, 15, 17, 30, 200 };

In this case N = 5 and for example I need a new vector of size 8 that
redistributes the values.

Based on the above statement, M and N are the _dimensions_ of your
vectors. Assuming you want to continue calling the two arrays vectors,
of course.
Think of it as a line:
P1(1, 10) - P2(2, 15) - P3(3, 17) - P4(4, 30) - P5(5, 200)

That's not a line. It's a bunch of lines, approximating a curve.
I need a new line:
P1(1, ?) - P2(2, ?) - P3(3, ?) - P4(4, ?) - P5(5, ?) - P6(6, ?) - P7(7,
?) - P8(8, ?)

If you're treating the "vector" as a collection of second coordinates in
a 2-dimensional space (which is what the above implies), why are you not
adjusting the first coordinate in the output sequence as well?

For example, if your original "line" starts at (1, 10) and ends at (5,
200), wouldn't you want the resulting "line" to also start at (1,
10) and end at (5, 200)?

As far as the specific example goes, what would those "?" be replaced
with? Do you have a specific relationship between the input and output
data in mind? Or is part of your question intended to solicit opinions
regarding what an appropriate relationship would be?
[...]
No, it's not a 2-dimensional vector, it's a 1-dimensional vector. But can
I use Matrix.TransformVectors()?

You cannot use TransformVectors(), of that much I am sure.

As for what the data is, it appears to me that it's not a 2-dimensional
vector, nor is it a 1-dimensional vector. The data is a 1-dimensional
array, which _possibly_ could be treated as an M-dimensional vector, but
in fact it doesn't appear to me that your data is really a vector
after all.

As for specifically how to "scale" the input data to create the desired
output data, I'm afraid I still don't see enough information to explain
what the intended results are. It seems that if you could provide at
least one concrete example of both the input and the output, that would
go a long way toward helping explain the problem better.

Pete
 
Peter said:
IMHO, we are having some problems with your terminology. For example:

You're right, my terminology is wrong :-).
Sorry! :-P
By vector I mean an array of int: int[] vector = new int[] { 10, 15, 17, 30, 200 };

In this case N = 5 and for example I need a new vector of size 8 that
redistributes the values.

Based on the above statement, M and N are the _dimensions_ of your
vectors. Assuming you want to continue calling the two arrays
vectors, of course.
Think of it as a line:
P1(1, 10) - P2(2, 15) - P3(3, 17) - P4(4, 30) - P5(5, 200)

That's not a line. It's a bunch of lines, approximating a curve.

You're right again, suppose that it's a curve.
If you're treating the "vector" as a collection of second coordinates
in a 2-dimensional space (which is what the above implies), why are
you not adjusting the first coordinate in the output sequence as well?

For example, if your original "line" starts at (1, 10) and ends at (5,
200) wouldn't you also want the resulting "line" to also start at (1,
10) and end at (5, 200)?

No, it's not a closed curve. It's only an array of int that I need to scale
to a smaller or larger array. Think of the array as Y-coordinates, and you have
to zoom (increase or decrease the size of) this curve.
As far as the specific example goes, what would those "?" be replaced
with? Do you have a specific relationship between the input and
output data in mind? Or is part of your question intended to solicit
opinions regarding what an appropriate relationship would be?

Mmmm, I'll post a prototype of the function that I need:
------------------------------
public int[] Scale(int[] array, int newSize) {
    if (array.Length <= 3)
        return array;

    // Otherwise calculate the new values
    int[] scaled = new int[newSize];

    // Here is the problem! :-)

    return scaled;
}
------------------------------

I have to preserve the curve's "morphology".
[...]
No, it's not a 2-dimensional vector, it's a 1-dimensional vector. But
can I use Matrix.TransformVectors()?

You cannot use TransformVectors(), of that much I am sure.

As for what the data is, it appears to me that it's not a
2-dimensional vector, nor is it a 1-dimensional vector. The data is
a 1-dimensional array, which _possibly_ could be treated as an
M-dimensional vector, but in fact it doesn't appear to me that
your data is really a vector after all.

My data are not real! I have to calculate them from a real image first (read
on...).
As for specifically how to "scale" the input data to create the
desired output data, I'm afraid I still don't see enough information
to explain what the intended results are. It seems that if you could
provide at least one concrete example of both the input and the
output, that would go a long way toward helping explain the problem
better.

In practice I need to resize the RGB projections (horizontal and vertical)
of images from XxY to TxZ, where T and Z must be fixed.

An RGB projection is only an int[] array; the length is X for horizontal
and Y for vertical projections. The range of values of the elements of the
arrays is 0..255 (the luminance).

I hope that's clearer; sorry for my English :-).

Thx!!!!!! ;-)
Matteo Migliore.
 
Matteo said:
[...]
For example, if your original "line" starts at (1, 10) and ends at (5,
200), wouldn't you want the resulting "line" to also start at (1,
10) and end at (5, 200)?

No, it's not a closed curve. It's only an array of int that I need to scale
to a smaller or larger array. Think of the array as Y-coordinates, and you have
to zoom (increase or decrease the size of) this curve.

I'm not saying it's a closed curve. I'm trying to establish what the
input and output data look like. Preserving the end-points (1, 10) and
(5, 200) (using your previous "sample" data) doesn't close the curve. It
just anchors it in the same geographical location it previously had.

As far as "zoom" goes...typically that's implemented by just scaling the
entire input. If you've got 2-D data ((x,y) coordinate pairs), you
simply apply the same scale factor to both the x and y component;
there's not any specific need to create or remove data points.

In some situations, it's desirable to "smooth" out the zooming when
increasing the size by adding new points, or to enhance efficiency when
decreasing the size by removing points. Are you trying to do this? If
so, are you _also_ scaling the coordinates as well, or is this really
just an exercise in smoothing/roughening the original curve?
Mmmm, I'll post a prototype of the function that I need:
------------------------------
public int[] Scale(int[] array, int newSize) {
    if (array.Length <= 3)
        return array;

    // Otherwise calculate the new values
    int[] scaled = new int[newSize];

    // Here is the problem! :-)

    return scaled;
}

In the above function, why do you ignore the "newSize" parameter if the
input array is no longer than 3 elements? Why is it acceptable for the
method to return an array of a different size than requested? And if
this is acceptable, can that be taken advantage of in the more general
solution? That is, is it acceptable in other situations for the method
to return an array of a length different than the one asked for?
[...]
As for specifically how to "scale" the input data to create the
desired output data, I'm afraid I still don't see enough information
to explain what the intended results are. It seems that if you could
provide at least one concrete example of both the input and the
output, that would go a long way toward helping explain the problem
better.

In practice I need to resize the RGB projections (horizontal and vertical)
of images from XxY to TxZ, where T and Z must be fixed.

Whether the data you posted are real or not, it would still be useful to
have an example of data that reflect the complete algorithm you're
looking for. So far, all you've shown is a hypothetical input array;
without seeing a corresponding hypothetical output array, it's very
difficult to visualize what exactly you're trying to do.

Pete
 
Whether the data you posted are real or not, it would still be useful
to have an example of data that reflect the complete algorithm you're
looking for. So far, all you've shown is a hypothetical input array;
without seeing a corresponding hypothetical output array, it's very
difficult to visualize what exactly you're trying to do.

Mmm, it's hard for me to explain in a post what I need.

The algorithm that I have to develop simply creates an array of Z elements
from an array of T elements, preserving the curve morphology.

I obtain the array by processing an image.

I calculate the horizontal RGB projection, and I need to scale
it to a fixed-size array. The original array size is the width of the
image.

I need to compare the RGB projection of one image to another's, but the
resolution of the images can be different, so I have to build fixed-size
arrays to compare.

Clear now?

Thx!!!
Matteo Migliore.
 
Matteo said:
[...]
Clear now?

Nope, sorry. Like I said, you really should post an example of input
data and output data that would result. You can try to describe the
process until you're blue in the face, the fact is there's nothing so
useful as having a concrete example to talk about.

If you do not even know what the output data will look like, then IMHO
you should focus on that first, rather than on some specific
implementation. But in that case, you will at least need to come up
with a more easily understood way of describing what your goals are.

In particular, while I understand the idea of "preserving the curve
morphology", since I don't know what the curve you're talking about is,
that still doesn't help me. The term "projection" may in fact describe
what you're doing, but it's not being used in a way with which I'm familiar.
You should try to simplify your terminology, so that we don't need any
specialized knowledge of your application to comprehend what you're
trying to do.

We're not dumb people here, but we're not necessarily conversant in all
technical fields, and we're not psychic either. :)

Pete
 
Peter has tried to get you to elaborate on the problem. It now sounds as
though you are trying to do image processing, and either stretch or shrink
an image based on an aspect ratio. This is quite different from stretching a
one-dimensional array, or refining/simplifying a cartesian polygon or curve,
as I thought you initially might be going toward.

If you are working on image processing, my understanding from colleagues is
that the value of any new pixel is based on the values of many surrounding
points, not just the ones immediately next to it. When throwing away points,
there is a similar issue in deciding which value to throw away. You might
throw away an important pixel.
 
Family said:
Peter has tried to get you to elaborate on the problem. It now sounds
as though you are trying to do image processing, and either stretch or
shrink an image based on an aspect ratio. This is quite different
from stretching a one-dimensional array, or refining/simplifying a
cartesian polygon or curve, as I thought you initially might be going
toward.

Yes, and I thank Peter sooooo much for his time! :-)

But no, I need to resize a one-dimensional array.
If you are working on image processing, my understanding from colleagues
is that the value of any new pixel is based on the values of many
surrounding points, not just the ones immediately next to it. When
throwing away points, there is a similar issue in deciding which
value to throw away. You might throw away an important pixel.

Suppose that you have three images: 640x480, 800x600, 1024x768.
The image is the same but at different resolutions.

Here is a method to calculate the RGB projections, the horizontal projection in
this case:
--------------------------
public unsafe int[] GetHorizontalRGBProjection(Bitmap bitmap) {
    ...
}

public int[] Scale(int[] array, int size) {
    ...
}
--------------------------

This function calculates the middle luminance for each column (X coordinate)
in the image and returns an array of middle luminances, so the array length
is the width of the image.

Same thing for the vertical projection, but the calculation is done on rows!

Ok, now I have to compare the horizontal RGB projections for images A, B and C.

The array lengths must be the same, so I have to scale the RGB proj. of A to
100 elements, the RGB proj. of B to 100 elements, and the same for C. 100 was
randomly selected; it could be 640 (the minimum width for an image). If the
image width is smaller than 100px, the function must scale the array to 640.

So at the end the code is:
----------------------------------
Bitmap a = ...; //640x480
Bitmap b = ...; //800x600
Bitmap c = ...; //1024x768

int[] horizontalRGBProjectionA = GetHorizontalRGBProjection(a); //array length 640
int[] horizontalRGBProjectionB = GetHorizontalRGBProjection(b); //array length 800
int[] horizontalRGBProjectionC = GetHorizontalRGBProjection(c); //array length 1024

horizontalRGBProjectionA = Scale(horizontalRGBProjectionA, 640); //array length 640
horizontalRGBProjectionB = Scale(horizontalRGBProjectionB, 640); //array length 640
horizontalRGBProjectionC = Scale(horizontalRGBProjectionC, 640); //array length 640
----------------------------------
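For readers following along, here is a minimal sketch of what such a projection method might look like. It assumes "middle luminance" means the average per column (the thread never pins this down), and it uses the slow but simple Bitmap.GetPixel() rather than the unsafe pointer access Matteo's signature implies:

```csharp
using System;
using System.Drawing;

static class Projections
{
    // Hypothetical sketch: average luminance (0..255) for each column.
    public static int[] GetHorizontalRGBProjection(Bitmap bitmap)
    {
        int[] projection = new int[bitmap.Width];
        for (int x = 0; x < bitmap.Width; x++)
        {
            long sum = 0;
            for (int y = 0; y < bitmap.Height; y++)
            {
                Color c = bitmap.GetPixel(x, y);
                // Rec. 601 luma approximation of the pixel's luminance.
                sum += (int)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);
            }
            projection[x] = (int)(sum / bitmap.Height);
        }
        return projection;
    }
}
```

The vertical projection is analogous: swap the roles of x and y and average each row instead of each column.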

Now I can compare the arrays.

Thx!!
Matteo Migliore.
 
Matteo said:
[...]
But no, I need to resize a one-dimensional array.

For what it's worth, you still haven't provided any guidance regarding
_how_ you want the data in the array to be calculated. Is that because
you yourself do not know?
[...]
This function calculates the middle luminance for each column (X coordinate)
in the image and returns an array of middle luminances, so the array length
is the width of the image.

Same thing for the vertical projection, but the calculation is done on rows!

Ok, now I have to compare the horizontal RGB projections for images A, B and C.

The array lengths must be the same, so I have to scale the RGB proj. of A to
100 elements, the RGB proj. of B to 100 elements, and the same for C. 100 was
randomly selected; it could be 640 (the minimum width for an image). If the
image width is smaller than 100px, the function must scale the array to 640.

I am not clear on what you mean by "middle". But, whether that's the
median value of the column or row of pixels, the average, or literally
the value of the middle pixel, I think the solution is basically the same.

What you are looking for is essentially a one-dimensional image scaling.
So, a couple of thoughts come to mind:

1) You may find that it makes more sense to scale the input bitmaps
first, and then do the comparison on data calculated from the scaled
input. This is less efficient because you wind up scaling data that you
don't necessarily use, but

a) it's not clear even now from your description that you
really don't want to use the data (even though you may believe that you
don't, it's possible that because you are scaling your data, you really
do want the scaling to take into account all neighboring pixels, not
just those in a specific direction), and

b) because it makes the implementation of your solution
simpler, the reduced efficiency may be a worthwhile price for simpler code.

2) If you don't want to scale the input first, I would say that
instead of treating the data as an array, you should generate new
bitmaps, as wide or high as the input bitmap, and one pixel in size in
the other direction. This way, you can take advantage of the built-in
.NET image processing functionality. How best to do this would depend
on what is meant by "middle".

In option #2, if you are literally taking the middle pixel from a
column or row, then you should just copy a complete row or column
(respectively) of pixels into a new bitmap and then scale that bitmap
and extract the luminosity from the resulting bitmap. If you have some
other meaning of "middle", you'll have to convert your luminosity data
into a format that can be treated as a regular Bitmap instance (for
example, a plain 24-bit-per-pixel RGB Bitmap, where each pixel is a gray
value based on the luminosity value). Then you just scale that bitmap
in a single dimension and extract back out the values (which should
still be gray values, making it trivial to convert back to luminosity).

In either case, the scaling will be done with the Bitmap and Graphics
classes. You'll create two Bitmap instances: the input one and the
output one. Then you'll use Graphics.FromImage() to get a Graphics
instance for the output Bitmap instance. Then you'll use the
Graphics.DrawImage() method to copy the data from the input Bitmap to
the output Bitmap, providing source and destination rectangles that
correspond to the full size of the input and output Bitmaps,
respectively. The DrawImage() method will then apply some image-scaling
algorithm to the data; which one is specifically used will depend on the
InterpolationMode property of the Graphics instance used for the drawing.
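A rough sketch of the approach Pete describes, for a horizontal strip (the class and method names are illustrative, and HighQualityBicubic is just one possible InterpolationMode choice):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

static class StripScaler
{
    // Scale an input.Width x 1 bitmap to newWidth x 1 using GDI+.
    public static Bitmap ScaleStrip(Bitmap input, int newWidth)
    {
        Bitmap output = new Bitmap(newWidth, 1);
        using (Graphics g = Graphics.FromImage(output))
        {
            // The image-scaling algorithm DrawImage() applies depends on this.
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(input,
                new Rectangle(0, 0, newWidth, 1),     // destination: full output
                new Rectangle(0, 0, input.Width, 1),  // source: full input
                GraphicsUnit.Pixel);
        }
        return output;
    }
}
```

Reading the gray values back out of the scaled strip then recovers the resampled luminosity array.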

Hope that helps.

Pete
 
Peter said:
Matteo said:
[...]
But no, I need to resize a one-dimensional array.

For what it's worth, you still haven't provided any guidance regarding
_how_ you want the data in the array to be calculated. Is that
because you yourself do not know?

I arrived at the conclusion that I can use the straight-line equation
to interpolate values from X1 to X2, from X3 to X4, etc., if the size
parameter is greater than the width of the original bitmap.
[...]
This function calculates the middle luminance for each column (X
coordinate) in the image and returns an array of middle luminances,
so the array length is the width of the image.

Same thing for the vertical projection, but the calculation is done on rows!

Ok, now I have to compare the horizontal RGB projections for images A, B and
C. The array lengths must be the same, so I have to scale the RGB proj. of A
to 100 elements, the RGB proj. of B to 100 elements, and the same for C. 100
was randomly selected; it could be 640 (the minimum width for an image). If
the image width is smaller than 100px, the function must scale the array to
640.

I am not clear on what you mean by "middle". But, whether that's the
median value of the column or row of pixels, the average, or literally
the value of the middle pixel, I think the solution is basically the
same.

Sorry, I meant the median luminosity value for each column for the horizontal
RGB projection and for each row for the vertical one.
What you are looking for is essentially a one-dimensional image
scaling. So, a couple of thoughts come to mind:

1) You may find that it makes more sense to scale the input
bitmaps first, and then do the comparison on data calculated from the
scaled input. This is less efficient because you wind up scaling
data that you don't necessarily use, but

a) it's not clear even now from your description that you
really don't want to use the data (even though you may believe that
you don't, it's possible that because you are scaling your data, you
really do want the scaling to take into account all neighboring pixels,
not
just those in a specific direction), and

b) because it makes the implementation of your solution
simpler, the reduced efficiency may be a worthwhile price for simpler
code.

No, I can't scale the images because performance is key,
so I have to scale the arrays and store them in a Dictionary to compare them.

So I can't resize the images; that was my first idea :-).
2) If you don't want to scale the input first, I would say that
instead of treating the data as an array, you should generate new
bitmaps, as wide or high as the input bitmap, and one pixel in size in
the other direction. This way, you can take advantage of the built-in
.NET image processing functionality. How best to do this would depend
on what is meant by "middle".

In option #2, if you are literally taking the middle pixel from a
column or row, then you should just copy a complete row or column
(respectively) of pixels into a new bitmap and then scale that bitmap
and extract the luminosity from the resulting bitmap. If you have
some other meaning of "middle", you'll have to convert your
luminosity data into a format that can be treated as a regular Bitmap
instance (for example, a plain 24-bit-per-pixel RGB Bitmap, where
each pixel is a gray value based on the luminosity value). Then you
just scale that bitmap in a single dimension and extract back out the
values (which should
still be gray values, making it trivial to convert back to
luminosity).
In either case, the scaling will be done with the Bitmap and Graphics
classes. You'll create two Bitmap instances: the input one and the
output one. Then you'll use Graphics.FromImage() to get a Graphics
instance for the output Bitmap instance. Then you'll use the
Graphics.DrawImage() method to copy the data from the input Bitmap to
the output Bitmap, providing source and destination rectangles that
correspond to the full size of the input and output Bitmaps,
respectively. The DrawImage() method will then apply some
image-scaling algorithm to the data; which one is specifically used
will depend on the InterpolationMode property of the Graphics
instance used for the drawing.

No, I can't :-). So I have to scale the arrays (I know, I'm repetitive :-D).

The straight-line equation and interpolation is a good approach,
but it's not simple. I think it's the only way, though :-(.

I found this article on CodeProject:
http://www.codeproject.com/useritems/Douglas-Peucker_Algorithm.asp

This algorithm solves a problem similar to mine, but it only
*reduces* the number of points and accepts a tolerance parameter, not how
many points to keep :-(.
Hope that helps.

Pete

I don't know how to thank you!!
(But my problem remains).

I'll upload my solution on CodePlex ;-)

It's an application to find duplicate images :-D
Matteo Migliore.
 
Matteo said:
Peter said:
Matteo said:
[...]
But no, I need to resize a one-dimensional array.

For what it's worth, you still haven't provided any guidance
regarding _how_ you want the data in the array to be calculated. Is
that because you yourself do not know?

I arrived at the conclusion that I can use the straight-line equation
to interpolate values from X1 to X2, from X3 to X4, etc., if the size
parameter is greater than the width of the original bitmap.

Yes, that's one way.

Your vector contains a sampling of a function of one variable (the function
is "median luminosity by column in an image") at some frequency (e.g. 300
pixels/inch), and what you're wanting to do is resample that function at a
different frequency (e.g. 72 pixels/inch). Doing linear interpolation
between adjacent points models the function as a series of straight line
segments. That might be good enough, but there are many other
interpolation functions possible. For example, you could use quadratic or
cubic interpolation (e.g. modelling the base function as a series of
parabolic or cubic curve segments). You really need to know something about
the true nature of the data to know which interpolation function will give
the most useful results.

For high-quality 2D image scaling, bi-quadratic or bi-cubic interpolation is
usually used, so I'd propose that quadratic or cubic interpolation is
probably what you want.

-cd
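Carl's linear-interpolation option can be sketched as follows. This is a minimal, unoptimized version; mapping the endpoints of the input onto the endpoints of the output is just one of several possible resampling conventions:

```csharp
using System;

static class Resampler
{
    // Resample a 1-D array to newSize using linear interpolation,
    // mapping the source index range [0, N-1] onto [0, newSize-1].
    public static int[] Resample(int[] array, int newSize)
    {
        if (array.Length == 0 || newSize <= 0)
            return new int[0];

        int[] result = new int[newSize];
        if (array.Length == 1 || newSize == 1)
        {
            // Degenerate cases: nothing to interpolate between.
            for (int i = 0; i < newSize; i++)
                result[i] = array[0];
            return result;
        }

        double step = (array.Length - 1) / (double)(newSize - 1);
        for (int i = 0; i < newSize; i++)
        {
            double pos = i * step;          // fractional source index
            int lo = (int)pos;
            int hi = Math.Min(lo + 1, array.Length - 1);
            double frac = pos - lo;
            result[i] = (int)Math.Round(array[lo] * (1 - frac) + array[hi] * frac);
        }
        return result;
    }
}
```

With Matteo's sample input { 10, 15, 17, 30, 200 } and newSize = 8, the first and last elements stay 10 and 200, and the interior values fall on the straight segments between the original points. The same method handles shrinking (newSize < N).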
 
Your vector contains a sampling of a function of one variable (the
function is "median luminosity by column in an image") at some
frequency (e.g. 300 pixels/inch), and what you're wanting to do is
resample that function at a different frequency (e.g. 72
pixels/inch). Doing linear interpolation between adjacent points
models the function as a series of straight line segments. That
might be good enough, but there are many other interpolation
functions possible. For example, you could use quadratic or cubic
interpolation (e.g. modelling the base function as a series of
parabolic or cubic curve segments). You really need to know
something about the true nature of the data to know which
interpolation function will give the most useful results.

For high quality 2D image scaling, bi-quadratic or bi-cubic
interpolation is usually used, so I'd propose that quadratic or cubic
interpolation is probably what you want.

OK, I'll have to study that. Good information. Thanks, Carl!

Thanks all guys!!

Tonight (I live in Italy at the moment) I developed
the application and published it on CodePlex here:
http://www.codeplex.com/SimilarImagesFinder

Four hours of hard work! :-D

Have fun! :-).
I'll come back soon!

Thx!!!!
Matteo Migliore.
 
Carl said:
[...]
For high quality 2D image scaling, bi-quadratic or bi-cubic interpolation is
usually used, so I'd propose that quadratic or cubic interpolation is
probably what you want.

But wouldn't the performance cost of that be very similar to the cost of
the equivalent image-processing version (e.g. bicubic)?

Matteo has already said he doesn't want the performance hit of the usual
image scaling algorithms. Of course, if a linear interpolation is fine,
I'm not exactly sure what the whole point of the question is, since he
seems to know how to do that already. But then, I think it's been clear
for some time now that I haven't been keeping up in this thread. :)

Pete
 
Peter said:
Carl said:
[...]
For high quality 2D image scaling, bi-quadratic or bi-cubic
interpolation is usually used, so I'd propose that quadratic or
cubic interpolation is probably what you want.

But wouldn't the performance cost of that be very similar to the cost
of the equivalent image-processing version (e.g. bicubic)?

No no - likely orders of magnitude faster (depends on the size of the image,
of course). Quadratic or cubic interpolation is not hard - IIRC, it's only
~10x the cost of linear interpolation.
Matteo has already said he doesn't want the performance hit of the
usual image scaling algorithms. Of course, if a linear interpolation
is fine, I'm not exactly sure what the whole point of the question is,
since he seems to know how to do that already. But then, I think it's
been clear for some time now that I haven't been keeping up in this
thread. :)

He's trying to do something "odd". I'd be interested to hear if his
approach to detecting duplicate images actually works - there are so many
things that could fool it, I would think the false negative rate would be
quite high.

-cd
 
He's trying to do something "odd". I'd be interested to hear if his
approach to detecting duplicate images actually works - there are so
many things that could fool it, I would think the false negative rate
would be quite high.

You can test it on CodePlex :-):
http://www.codeplex.com/SimilarImagesFinder

For a similarity of at least 95% you can be sure that the images are
identical ;-).

I have to apply other algorithms, in particular edge detection and dominant
color, to be sure that the images are the same, but for now it works fine.

Matteo Migliore.
 
Matteo Migliore said:
Yes, and I thank Peter sooooo much for his time! :-)

But no, I need to resize a one-dimensional array.

No, you're still using the wrong terminology.

First it was "scale", which implies multiplying each element by a scalar.
Now "resize", which would be "truncation" or "zero-padding" depending on
whether the new length is shorter or longer.

What you need is "resampling". Google for that.
 
Peter Duniho said:
Carl said:
[...]
For high quality 2D image scaling, bi-quadratic or bi-cubic interpolation
is usually used, so I'd propose that quadratic or cubic interpolation is
probably what you want.

But wouldn't the performance cost of that be very similar to the cost of
the equivalent image-processing version (e.g. bicubic)?

No, transforming a handful of points will be far faster than transforming a
large 2-D image.
 
Ben said:
No, transforming a handful of points will be far faster than transforming a
large 2-D image.

That's not what I'm talking about though. I'm talking about using the
built-in image scaling functionality to transform a 1xM or Mx1 image.

I agree that transforming the whole image first is less efficient. And
I said so in my previous post.

But when the data is restricted in one dimension to a single pixel, it
seems to me that a general-purpose bi-cubic interpolation will, because
the data available to it is restricted, be in the same ballpark as a
plain one-dimensional cubic interpolation.

I don't have enough experience to know the actual answer, but both
people who have told me I'm wrong do not appear to have been paying
close enough attention to my actual suggestion. It would be nice to see
a reply that answers the question while being specific enough to be
clear that the answer takes into account my actual suggestion.

Pete
 
