Discussion:
How to handle Machine storage exhausted
Lizette Koehler
2017-07-14 14:39:09 UTC
List -

I have been asked to look at this rexx to see how to handle the condition

Machine storage exhausted

Better.

Currently the job just dies and it takes a while to determine what is going on. Is there a way within a rexx (Signal perhaps) that would capture this event and allow me to jump to some code to exit nicely?

The process continues to run and eventually fails. I would like to include some error handling for this to allow it to die nicer.
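
A minimal sketch of the sort of trap being asked about, assuming the failure surfaces as REXX SYNTAX error 5 ("Machine storage exhausted") and that enough storage remains for the handler to run; the dataset name is hypothetical:

  /* REXX - end cleanly if the interpreter runs out of storage       */
  signal on syntax name noStorage        /* error 5 = storage exhausted */
  "ALLOC FI(INDD) DA('MY.BIG.INPUT') SHR REUSE"
  "EXECIO * DISKR INDD (STEM rec. FINIS"
  say 'Read' rec.0 'records'
  exit 0

  noStorage:
    say 'REXX error' rc 'at line' sigl':' errortext(rc)
    drop rec.                            /* give back whatever we can   */
    "FREE FI(INDD)"
    exit 12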

I just recently bumped up the REGION size to 24M, I am now looking at bumping it to 128M.

Thanks

Lizette

Bob Bridges
2017-07-14 15:06:00 UTC
I'm missing something, Lizette. When my REXXes use up the available RAM (as for example when I try to EXECIO * DISKR a file that's too big to fit), I get a lot of messages from the OS. Are you saying that this REXX traps that output and displays just:

Machine store exhausted

...and then two lines later:

Better.

...? Because a) that's really weird (why would it say "Better."?), and anyway b) if you're trying to figure out what's causing this you should be able to find those error messages somewhere in the REXX code and change that behavior to whatever would suit you better.

---
Bob Bridges
***@gmail.com, cell 336 382-7313
***@InfoSecInc.com

/* Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but rather we have those because we have acted rightly. We are what we repeatedly do. Excellence, then, is not an act but a habit. -Aristotle */


Hobart Spitz
2017-07-14 15:24:15 UTC
Presumably your file is too large for the free space.

Not knowing what you are trying to do, I can't be specific.

I see these alternative options:
1 - Read a bunch of records at a time: "EXECIO 100 ...", and process each batch before reading more (see the sketch after this list).
2 - Use LINEIN().
3 - Add PROCEDURE to any routine that produces a large volume of intermediate results.
4 - Make sure you don't have infinite recursion or an infinitely growing string.
5 - Use the PIPE command if you have it.
6 - Use SIGNAL OFF ERROR, check the EXECIO return code, and, when storage runs out, DROP any stems you don't need and retry.
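
A rough sketch of option 1, batching the reads; the ddname and dataset name are hypothetical, and the return-code check follows the usual EXECIO convention (rc 2 = end of file):

  /* REXX - read and process 100 records at a time                   */
  "ALLOC FI(INDD) DA('MY.BIG.INPUT') SHR REUSE"
  done = 0
  do until done
    "EXECIO 100 DISKR INDD (STEM rec."
    if rc = 2 then done = 1              /* end of file reached        */
    do i = 1 to rec.0
      /* ... examine rec.i here ... */
      nop
    end
    drop rec.                            /* release this batch's storage */
  end
  "EXECIO 0 DISKR INDD (FINIS"
  "FREE FI(INDD)"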


Lizette Koehler
2017-07-14 15:35:51 UTC
Basically

Read a bunch of PDS source datasets (proclibs, cntlibs, etc.), create a repository entry of who did what/when, and upload it to a DB2 table for historical analysis.

Job can produce (depending on source changes going on) anywhere from 1 line to 10s of thousands of lines of material to sift through.

In the dawn of time the code was sufficient. Now, I think the techniques used are starting to show their age. A rewrite is probably in order, but there is no one who supports the code. I get it because I am a little REXX-y. Probably a proper programming language would be better at this stage.


Hence the process to change region size rather than attempt to update the code. Old philosophy: why should there be comments when I know what I am doing?



Lizette



richard rozentals
2017-07-14 15:49:09 UTC
My guess is that you are reading all the 10s of thousands of lines into a stem. Change your logic to read the lines to the queue. The queue does not use memory to store the lines; it uses the JES spool. When you read a file into a stem, it is stored in memory. By the way, using the queue you can read 100s of thousands of lines.
The other option is to use the DROP command to clean up variables (never tested this).
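
A minimal sketch of the stack approach, with a hypothetical dataset name; whether the stack really spills to auxiliary storage rather than the region is the claim above, not something this sketch guarantees:

  /* REXX - read the file onto the data stack, then drain it         */
  "ALLOC FI(INDD) DA('MY.BIG.INPUT') SHR REUSE"
  "EXECIO * DISKR INDD (FINIS"           /* no STEM: lines go on the stack */
  do while queued() > 0
    parse pull line
    /* ... process line here ... */
  end
  "FREE FI(INDD)"
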
Richard

Bob Bridges
2017-07-18 20:37:39 UTC
I meant to add a comment and then got busy for a few days - apologies for the delay. Your original question, Lizette, was whether there's a way for a REXX program to detect an out-of-storage condition and trap it so the program can end in a less uncontrolled way. Like programming geeks everywhere, none of us actually answered the question, as far as I noticed; we just spouted other solutions. I did, at least.

I don't know the answer to your question now any more than I did last week, but now that you've said it's for loading into DB2 I do have another idea, although it involves a little extra rewriting. Like this:

1) Write the data one record at a time (EXECIO 1 DISKW) to a dataset
2) Connect with QMF
3) In QMF, run a procedure that loads the raw data into DB2

Getting your REXX to talk to QMF is extra work, but no more difficult than getting it to talk to DB2. The advantage is that procs running in QMF upload and download large datasets ~much~ faster than REXX can do it one record at a time with DB2 directly. I wrote some routines when I was at Discover in Chicago; the query came up with thousands of records, which at REXX's rate of input (at least back then) would have been impractical, say half an hour at a guess. But a QMF proc could load the query results into a dataset in just a few seconds, and the REXX could parse that raw data easily...or "quickly", rather, because the REXX had to know where to find the special hex characters that delimited the columns in each record. Still, well worth the effort.

This solves your RAM problem, you see, because you're writing out the data one record at a time, and yet you don't have to wait forever to load the rows into DB2 the slow way.
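
Step 1 might look like the sketch below, so only one record is ever held in a REXX variable at a time; the ddname and dataset are hypothetical, and the loop stands in for whatever builds each extract record:

  /* REXX - write the extract one record at a time                   */
  "ALLOC FI(OUTDD) DA('MY.EXTRACT.DATA') OLD REUSE"
  do i = 1 to 10                         /* stand-in for the real scan loop */
    out.1 = 'extract record' i
    "EXECIO 1 DISKW OUTDD (STEM out."
  end
  "EXECIO 0 DISKW OUTDD (FINIS"
  "FREE FI(OUTDD)"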

---
Bob Bridges
***@gmail.com, cell 336 382-7313
***@InfoSecInc.com

/* Your patient must demand that all his own utterances be taken at their face value and judged simply on the actual words, while at the same time judging all his mother's utterances with the fullest and most oversensitive interpretation of the tone and the context and the suspected intention....You know the kind of thing: "I simply ask her what time dinner will be and she flies into a temper". Once this habit is well established you have the delightful situation of a human saying things with the express purpose of offending and yet having a grievance when offence is taken. -advice to a tempter, from The Screwtape Letters by C S Lewis */

Jeremy Nicoll
2017-07-19 10:16:09 UTC
On Tue, 18 Jul 2017, at 21:38, Bob Bridges wrote:

> I don't know the answer to your question now any more than I did last
> week, but now that you've said it's for loading into DB2 I do have
> another idea, although it involves a little extra rewriting. Like this:
>
> 1) Write the data one record at a time (EXECIO 1 DISKW) to a dataset
> 2) Connect with QMF
> 3) In QMF, run a procedure that loads the raw data into DB2

If that works, surely it'd be better to write ten or a hundred or a thousand records at a time, and then whatever's left in the final chunk? Then some experiment (based on record length & number of records) would tell you what a good compromise between far too many chunks and not enough would be?
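
One way that chunking might look, assuming the output DD is already allocated and using 100 as an arbitrary chunk size:

  /* REXX - write 100 records at a time, then the final partial chunk */
  chunk = 100
  n = 0
  do i = 1 to 2500                       /* stand-in for the real record source */
    n = n + 1
    out.n = 'record' i
    if n = chunk then do
      "EXECIO" n "DISKW OUTDD (STEM out."
      drop out.
      n = 0
    end
  end
  if n > 0 then "EXECIO" n "DISKW OUTDD (STEM out."   /* whatever is left */
  "EXECIO 0 DISKW OUTDD (FINIS"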

--
Jeremy Nicoll - my opinions are my own.

Bob Bridges
2017-07-20 19:09:31 UTC
I'm sure you're right. But the coding for <n> records at a time is more complex, requiring an extra layer; if you do just one record at a time, it's simpler. That is, for output; for input the coding difference is smaller.

I'm assuming that doing it one record at a time is ~slightly~ less efficient than doing it in batches as you suggest, but only slightly. There are two other possibilities, and I just don't know (and haven't yet tested) which of the three is correct:

1) Maybe it makes no difference whatsoever; the OS puts the records in RAM and sends them to DASD when a full block has been written, and trying to batch them up in the program changes the timing either not at all or so little as to be unmeasurable. That'd be nice, wouldn't it? :)

2) Or maybe I'm all off and it makes a huge difference, for some reason. But about a decade ago, I vaguely recall that I ~did~ time that a little, and decided this isn't true. Mind you, I didn't really time it; I ran the program two different ways and felt that they ran in about the same amount of time. Probably I was going to stick a timer in the program, but got distracted by some other priority and never came back to it, but I don't remember.

Now that you've brought it up, maybe I'll finally get around to trying it out.
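
For what it's worth, the timer itself is only a couple of lines in REXX; the EXECIO shown is just a stand-in for whichever variant is being measured, and INDD is assumed to be allocated already:

  call time 'R'                          /* reset the elapsed-time clock */
  "EXECIO * DISKR INDD (STEM rec. FINIS" /* the code under test          */
  say 'Elapsed:' time('E') 'seconds for' rec.0 'records'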

---
Bob Bridges
***@gmail.com, cell 336 382-7313
***@InfoSecInc.com

/* It does not matter how slowly you go so long as you do not stop. -Confucius */


Hobart Spitz
2017-07-20 19:19:22 UTC
I have not tested it myself, but it's my understanding that EXECIO is horribly slow. Anyone who cares about performance will avoid "EXECIO 1 ...", and process records in bunches. It may be due to the overhead of the TSO host command interface; I know that ISPF uses its own CLIST interpreter, probably for that reason. It's too bad that ISPF doesn't do something similar for REXX.

LINEIN() should be available everywhere, and does not suffer from this performance issue. It does not have to go thru the TSO host command interface. Many of the TSO routines (IKJCT441, the variable access service, is another) are old and slow and have not been updated (or cannot be, for compatibility reasons) since the original days of TSO.



--
OREXXMan

ITschak Mugzach
2017-07-20 19:48:01 UTC
One way to deal with memory shortage is to use lean code: drop (Drop xxx.i) any record you have processed. The problem with large files is that you don't know how large they are. If that is the case, you can COUNT the records using IDCAMS (or a CBT count program) and decide whether a single EXECIO call or multiple calls should be used. It costs time, but ensures that the program will do the job.

ITschak




--
ITschak Mugzach
*|** IronSphere Platform* *|** Automatic ISCM** (Information Security
Contiguous Monitoring) **| *

Paul Gilmartin
2017-07-20 20:41:55 UTC
On 2017-07-20, at 13:47, ITschak Mugzach wrote:

> One way to deal with memory shortage is to use lean code: drop (Drop xxx.i)
> any record you processed. The problem with large files is that you don't
> know how large they are. If this is the case, you can COUNT the records
> using IDCAMS (Or a CBT count program) and decide if a single or multiple
> EXECIO calls should be used. It spends time, but ensures that the program
> will do the job.
>
I understand that DROPping a single member of a compound symbol reclaims
no storage; storage is reclaimed only when the entire compound is DROPped.

I would hope that leaving the scope of a PROCEDURE would reclaim compounds
that are not EXPOSEd.
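
A sketch of that pattern, assuming (as hoped above) that the interpreter can reclaim the stem when the routine returns; the library and member names are hypothetical:

  /* REXX - keep each member's records local to a routine            */
  say scanMember("'SYS1.PROCLIB(ASMPROC)'") 'records scanned'
  exit 0

  scanMember: procedure                  /* rec. is local, not EXPOSEd */
    parse arg dsn
    "ALLOC FI(INDD) DA("dsn") SHR REUSE"
    "EXECIO * DISKR INDD (STEM rec. FINIS"
    "FREE FI(INDD)"
    /* ... summarize the rec.0 records here ... */
    return rec.0                         /* rec. goes out of scope on return */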

I have a case that demonstrates DROPping a single element can actually
*increase* storage use.


On 2017-07-20, at 13:20, Hobart Spitz wrote:
>
> LINEIN() should be available everywhere, and does not suffer from this
> performance issue. It does not have to go thru the TSO host command
> interface.
>
Is LINEIN() available for Classic data sets, or only for UNIX paths?

Might ADDRESS MVS EXECIO perform better than ADDRESS TSO EXECIO by
avoiding the TSO host command interface?
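
For the record, issuing it that way is just a matter of switching the host environment; whether it is actually faster is the open question above. The DD is assumed to be pre-allocated (for example by a JCL DD statement), since ALLOC is a TSO command and not available under ADDRESS MVS:

  address mvs "EXECIO * DISKR INDD (STEM rec. FINIS"
  say rec.0 'records read under ADDRESS MVS'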

-- gil

Jeremy Nicoll
2017-07-21 09:04:50 UTC
On Thu, 20 Jul 2017, at 20:47, ITschak Mugzach wrote:
> The problem with large files is that you don't
> know how large they are. If this is the case, you can COUNT the records
> using IDCAMS (Or a CBT count program) and decide if a single or multiple
> EXECIO calls should be used. It spends time, but ensures that the program
> will do the job.

LISTDSI will tell you lots about the allocated space that a dataset uses, which might - if it's a sequential file anyway - allow a quick judgement to be made.
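
A quick sketch of that judgement call, with a hypothetical dataset name; LISTDSI fills in variables such as SYSRECFM, SYSLRECL, SYSALLOC, SYSUSED and SYSUNITS:

  /* REXX - size up the dataset before deciding how to read it       */
  lrc = listdsi("'MY.BIG.INPUT'")
  if lrc = 0 then do
    say 'RECFM='sysrecfm 'LRECL='syslrecl
    say 'Allocated:' sysalloc sysunits', used:' sysused sysunits
  end
  else say 'LISTDSI failed, reason code' sysreason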

--
Jeremy Nicoll - my opinions are my own.

Thom Stone
2017-07-21 22:11:20 UTC
I've been retired four years now, but I do remember there was a system package available that made LINEIN etc. available to TSO rexx. My SMP programmer made it available to me and we tested/played with it for a bit, but I no longer remember what became of it. I was fortunate that I always had enough memory to use EXECIO to hold all my records. There was so much: AREXX, System rexx, Netview rexx, TSO rexx, CMS rexx, USS Rexx, linux, unix, windows, etc.
Thanks,
Thom

Paul Gilmartin
2017-07-22 00:09:32 UTC
On 2017-07-21, at 16:12, Thom Stone wrote:

> I've been retired four years now, but I do remember there was system package available that made LINEIN etc. available to TSO rexx. My SMP programmer made it available to me and we tested/played with it for a bit, but I no longer remember what became of it.
>
Two ways. It is/was a library function for compiled Rexx. Your SMP
programmer may have had a way to liberate it. It supported only Classic
data sets, not UNIX files.

There is now a Rexx function package which implements most of Stream I/O
(but not SIGNAL ON NOTREADY, which requires more than a function package.)
That one supports only UNIX files, not Classic data sets.

And stream I/O is now intrinsic in Rexx for VM/CMS.

<SIGH> Conway's Law. </SIGH>

> ... I was fortunate that I always had enough memory to use EXECIO to hold all my records. There was so much; AREXX, System rexx, Netview rexx, TSO rexx, CMS rexx, USS Rexx, linux, unix, windows, etc.
>
Regina, Rexx IMC, ...

-- gil

Jesse 1 Robinson
2017-07-20 20:59:22 UTC
IIRC EXECIO for one record at a time was introduced for TSO/E. My VM colleagues had never heard of it. My *impression* was that its purpose was to simplify converting CLIST to REXX. Since CLIST could only read one record at a time, CLIST logic flow was built around that capability.

EXECIO for a whole file in one shot is super-fast. Of course there are memory limitations, but these days virtual is way bigger than it was back then.

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
***@sce.com


Paul Gilmartin
2017-07-20 21:37:31 UTC
On 2017-07-20, at 15:00, Jesse 1 Robinson wrote:

> IIRC EXECIO for one record at a time was introduced for TSO/E. My VM colleagues had never heard of it. My *impression* was that its purpose was to simplify converting CLIST to REXX. Since CLIST could only read one record at a time, CLIST logic flow was built around that capability.
>
EXECIO was available on CMS long before it appeared in TSO/E 2 (although
TRL speaks of its having been developed simultaneously for CMS and MVS.)

I do not recall the LINES argument's ever enforcing a minimum value; "1"
was always allowed. CMS options included VAR for a simple variable and
STRING for inline data, both lacked by Rexx on z/OS.

In CMS, EXECIO was a CMS command, antedating Rexx.

> EXECIO for a whole file in one shot is super-fast. Of course there are memory limitations, but these days virtual is way bigger than it was back then.
>
Processing a "whole file in one shot" is hardly meaningful if that file
represents a non-static stream such as DSN(*).

-- gil

Paul Gilmartin
2017-07-21 14:35:01 UTC
On 2017-07-20, at 15:00, Jesse 1 Robinson wrote:
>
> EXECIO for a whole file in one shot is super-fast. Of course there are memory limitations, but these days virtual is way bigger than it was back then.
>
A very unsatisfactory alternative is ISPF LMPUT with the MULTX option.
The programmer must build a buffer image with embedded binary control
information. This probably has quadratic overhead with Rexx facilities.

Why, why, doesn't ISPF simply support Rexx compound symbols for
this purpose?

-- gil

Bob Bridges
2017-08-16 20:31:24 UTC
Reading these emails belatedly, I was finally motivated to do some timing tests.

I have a handy file for the purpose, only 2700 records but I use it a lot so it was in my mind. I wrote PGM1 to allocate the file, start the timer, process all the input, stop the timer and then free the DD and display the results; this way the overhead affects the time as little as possible. I used three methods:

1) Write <n> records at a time to a stem, and drop each record from the stem one at a time.
2) Write <n> records to a stem, then drop the entire stem.
3) Write <n> records to the stack, then parse pull each record.

I wrote PGM2 to call PGM1 eight times and display the average time. I ran PGM2 for 1 record at a time, 5, 20 and *. Here's what I get:

--Read <n> records at a time:          1            5            20           *
Stem, drop each line individually:     0.087968875  0.075949     0.073918375  0.073024125
Stem, drop stem in batches:            0.087492125  0.074802     0.072333375  0.072098625
Stack, using parse pull:               0.089421375  0.07645475   0.073615125  0.07308325

I conclude:

a) As expected, reading one record at a time is slower than *. But there isn't as much difference as I expected, between 20% and 22% more.

b) As expected, dropping the stem as a group between EXECIO statements is faster than doing it one line at a time, but by a much smaller margin.

c) Contrary to my expectations, the stem is very slightly faster than the stack. Not enough faster to outweigh other considerations.

Notice that although there's 20% difference between reading one record at a time and the whole file, most of that difference occurs between one record and five records at a time. That is, 1 rec/read takes 17% more time than 5 rec/read, but 5 rec/read takes only 3½% more time than 20 rec/read and 20 rec/read only 0.3% more than 2700.

---
Bob Bridges
***@gmail.com, cell 336 382-7313
***@InfoSecInc.com

/* Even the most marginal NBA player is an absurdly better athlete than an ordinary person. When basketball people say that Grant Long can't shoot, can't pass, can't dribble, what they mean is: He can shoot, pass and dribble better than you, better than anybody you know, better than all but a few hundred people in the world. -from _Why the NBA Isn't As Offensive As You Think_ by Dave Barry */

Paul Gilmartin
2017-08-16 21:22:07 UTC
On 2017-08-16, at 14:32, Bob Bridges wrote:
>
> -----Original Message-----
> From: TSO REXX Discussion List [mailto:TSO-***@VM.MARIST.EDU] On Behalf Of Jesse 1 Robinson
> Sent: Thursday, July 20, 2017 17:00
>
> IIRC EXECIO for one record at a time was introduced for TSO/E. My VM colleagues had never heard of it. My *impression* was that its purpose was to simplify converting CLIST to REXX. Since CLIST could only read one record at a time, CLIST logic flow was built around that capability.
>
As long as I can remember CMS EXECIO, long antedating TSO/E Rexx, had
the LINES argument with 1 (and 0) as valid values and STEM, VAR, and
STRING options. The last, very convenient and never available in TSO,
allowed output data to appear inline in the EXECIO command.

"[O]ne record at a time" is useful long-running processes when input
arrives sporadically and prompt output is desired.

-- gil

Hamilton, Robert
2017-08-16 21:39:40 UTC
IIRC, EXECIO predates REXX; seems it worked with EXEC2 as well, but not before; it requires the extended parameter list.
It always accepted STEMs and such; it's only TSO where people (including in SAMPLIB...hey, IBM.....) seem to think they can only write a record at a time. I recently rewrote a DR script to build a job in stem-array and write it to JES with a single EXECIO call. The next guy took that apart and changed it back to write a line at a time.
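
The stem-and-one-EXECIO shape is roughly the sketch below; the JCL statements and the internal-reader allocation are hypothetical stand-ins for the real DR deck:

  /* REXX - build the job in a stem, submit it with one EXECIO       */
  jcl.1 = '//MYJOB    JOB (ACCT),CLASS=A,MSGCLASS=X'
  jcl.2 = '//STEP1    EXEC PGM=IEFBR14'
  jcl.0 = 2
  "ALLOC FI(JESDD) SYSOUT(A) WRITER(INTRDR) REUSE"
  "EXECIO" jcl.0 "DISKW JESDD (STEM jcl. FINIS"
  "FREE FI(JESDD)"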

<*sigh*>



R;

Rob Hamilton
Sr. System Engineer
Chemical Abstracts Service




Paul Gilmartin
2017-08-17 14:31:12 UTC
On 2017-08-16, at 15:40, Hamilton, Robert wrote:

> IIRC, EXECIO predates REXX; seems it worked with EXEC2 as well, but not before; it requires the extended parameter list.
> It always accepted STEMs and such; it's only TSO where people (including in SAMPLIB...hey, IBM.....) seem to think they can only write a record at a time. I recently rewrote a DR script to build a job in stem-array and write it to JES with a single EXECIO call. The next guy took that apart and changed it back to write a line at a time.
>
> <*sigh*>
>
I do that when I want to browse the SYSOUT with SDSF in real time.

-- gil

Garrett, Robert
2017-08-17 14:50:08 UTC
To each his own I guess. For me it depends on the problem at hand that I'm coding for. If the amount of data is both reasonably small and finite, then I reckon it doesn't matter much.

However, if I'm building something that I intend to work with any set of data of unknown and arbitrary size (perhaps unbounded), I'll go record at a time - every time.

To do otherwise reminds me too much of a monster of a Fortran program I built back in college as part of a particle physics research project for my professor at the time. It did that - loaded all the data from a file into a memory array at once and then crunched over it. It was such a pig that the University data center would only allow me to run it during a short time window in the middle of the night because when it was running, almost nothing else could. Looking back on that now, I'm embarrassed that I built it that way but at the time I really didn't know any better. Maybe I'm just olde (well, no maybe about it, I -am- olde) but in general, loading a whole file into memory just seems amateurish to me - YMMV.

Rob

Bill Turner, WB4ALM
2017-08-17 17:17:01 UTC
Depends on the size of the file, how many other people/processes are also accessing it, and how long you want to be enqueued on it. A second consideration is how often the file is updated and whether it is primarily a read-only file.

/s/ Bill Turner, wb4alm


