[pooma-dev] Evaluator/ReductionEvaluator.h question
Richard Guenther
rguenth at tat.physik.uni-tuebingen.de
Fri Feb 21 17:36:07 UTC 2003
On Wed, 19 Feb 2003, Richard Guenther wrote:
> On Tue, 18 Feb 2003, Richard Guenther wrote:
>
> > Why is the result of ReductionEvaluator<>::evaluate() initialized
> > to Expr.read(0), with op never applied to it? This seems to be wrong,
> > for instance if the operation is
> >
> > void op(double &res, double val)
> > {
> > double tmp = std::sqrt(val);
> > if (tmp > res)
> > res = tmp;
> > }
>
> I see now that the current implementation does make sense for all
> reduction operators I can think of, but the evaluation loops look
> hard for the compiler to optimize, so may I propose the following
> patch?
I expected some criticism of the patch, namely the following...
(so I am delaying committing it).
> +++ edited/src/Array/Reductions.h Tue Feb 18 12:59:28 2003
> @@ -84,7 +84,7 @@
> template<int Dim, class T, class EngineTag>
> T min(const Array<Dim, T, EngineTag> &a)
> {
> - T ret;
> + T ret = std::numeric_limits<T>::max();
What about types that don't have a std::numeric_limits<> specialization?
Are there any that we care about? Tiny::Zero<>, probably?
> @@ -124,7 +124,7 @@
> template<int Dim, class T, class EngineTag>
> T bitOr(const Array<Dim, T, EngineTag> &a)
> {
> - T ret;
> + T ret = static_cast<T>(0ULL);
Does this work for all types we care about? Do we need to use memset()
here? What about FP types - can this initial value be an SNaN or some
other trapping representation that we will choke on later?
I'll go on adding two testcases, one for Arrays and one for Fields, and
use memset() for the all-bits-one and all-bits-zero initial values. I
don't know what to do about, or whether to care about, the
numeric_limits<> issues.
Any ideas, comments?
Richard.