[pooma-dev] Parallel File I/O
Arno Candel
candel at itp.phys.ethz.ch
Thu Aug 29 15:29:47 UTC 2002
Many thanks!
I just implemented a serial writer. Can you give me a hint on how to
structure a reader that reads a file from NFS into a distributed array?
Thanks in advance,
Arno
Richard Guenther wrote:
>On Wed, 28 Aug 2002, Arno Candel wrote:
>
>>Hi,
>>
>>Is there a clever way to handle large distributed Array I/O to disk? I
>>don't want all contexts to block each other while reading/writing.
>>
>>A straight-forward reader implementation like
>>
>>Array<3, double, MultiPatch<GridTag, Remote<Brick> > > A;
>>A.initialize(Domain, Partition, DistributedTag());
>>
>>for (int i = A.domain()[0].first(); i <= A.domain()[0].last(); ++i)
>>  for (int j = A.domain()[1].first(); j <= A.domain()[1].last(); ++j)
>>    for (int k = A.domain()[2].first(); k <= A.domain()[2].last(); ++k)
>>    {
>>      my_ifstream >> value;
>>      A(i,j,k) = value;
>>    }
>>
>>
>
>You are effectively doing all work n times here ;)
>
>I use something like the following (which does I/O on one node only - the
>only way to work reliably with something like NFS):
>
>  for (Layout_t::const_iterator domain = A.layout().beginGlobal();
>       domain != A.layout().endGlobal(); ++domain) {
>    Interval<Dim> d = intersect((*domain).domain(), totalDomain);
>    // make a local copy of the remote data
>    Array<Dim, TypeofA::Element_t, Remote<Brick> > a;
>    a.engine() = Engine<Dim, TypeofA::Element_t, Remote<Brick> >(0, d);
>    a = A(d);
>    Pooma::blockAndEvaluate();
>    // do I/O - on node 0 only
>    if (Pooma::context() != 0)
>      continue;
>    // from here on, use a.engine().localEngine() for all access to a!
>  }
>
>An equivalent loop for distributed I/O would iterate over the layout's
>local patch list and use the localEngine() of A directly.
>
>Hope this helps, Richard.
>
>--
>Richard Guenther <richard.guenther at uni-tuebingen.de>
>WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
>The GLAME Project: http://www.glame.de/
>
>
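[Editor's note] The crux of the distributed variant Richard mentions is that each context seeks directly to the bytes of its own patches instead of streaming the whole file. Here is a minimal serial sketch of that offset arithmetic, independent of POOMA, assuming a raw binary file holding the full array as row-major doubles (i slowest, k fastest); `Patch` and `readPatch` are hypothetical names, not POOMA API:

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Hypothetical axis-aligned patch of a global nx*ny*nz grid
// (inclusive bounds on each axis).
struct Patch { int i0, i1, j0, j1, k0, k1; };

// Read only the elements belonging to patch p from a binary file that
// holds the full array as nx*ny*nz raw doubles in row-major order.
// Each contiguous k-run is fetched with one seek + one read, so a
// context touches only its own bytes.
std::vector<double> readPatch(std::ifstream& in, int ny, int nz,
                              const Patch& p)
{
    const int nk = p.k1 - p.k0 + 1;
    std::vector<double> local;
    std::vector<double> run(nk);
    for (int i = p.i0; i <= p.i1; ++i)
        for (int j = p.j0; j <= p.j1; ++j) {
            // linear index of (i, j, p.k0) in the global array
            std::streamoff off = ((std::streamoff)i * ny + j) * nz + p.k0;
            in.seekg(off * (std::streamoff)sizeof(double));
            in.read(reinterpret_cast<char*>(run.data()),
                    nk * sizeof(double));
            local.insert(local.end(), run.begin(), run.end());
        }
    return local;
}
```

In the distributed loop each context would call something like this once per entry in its local patch list and copy the result into the patch's localEngine(). Note this only works with fixed-size records: for whitespace-separated text, byte offsets cannot be precomputed, so every context ends up scanning the whole file anyway.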