COEN 171
Principles of
Programming Languages
Winter 2000
Page 375 ff:
3. Argue in support of the Ada 83 designers’ decision to allow the implementor to choose between implementing in out mode parameters by copy or by reference.
The tradeoff is efficiency of passing parameters vs. efficiency of accessing them inside the subprogram: passing by reference is cheap (one address is transmitted) but every access then goes through a level of indirection, while passing by copy costs a copy in each direction but makes every access direct. Ada leaves this choice to the implementor, rather than forcing a single solution, because different situations favor different approaches. Passing a large array which the subprogram accesses only a few times is faster by reference; passing any argument that the subprogram accesses many times is faster by copy.
10. Consider the following program written in C syntax:

void main () {
    int value = 2, list[5] = {1, 3, 5, 7, 9};
    swap (value, list[0]);
    swap (list[0], list[1]);
    swap (value, list[value]);
}

void swap (int a, int b) {
    int temp;
    temp = a;
    a = b;
    b = temp;
}
For each of the following parameter-passing methods, what are all of the values of the variables value and list after each of the three calls to swap?
a. Passed by value

With pass by value, none of the actual arguments are changed, so the variables retain their initial values: value == 2 and list == {1, 3, 5, 7, 9} after all three calls.
b. Passed by reference

With pass by reference, the actual arguments are changed. After the first call to swap, value == 1 and list[0] == 2. After the second call, list[0] == 3 and list[1] == 2. For the third call, value == 1 at that point, so list[value] is list[1]; after the call, value == 2 and list[1] == 1, leaving list == {3, 1, 5, 7, 9}.
c. Passed by name

With pass by name, it is as if the text of the arguments were substituted into the text of the subprogram. For the first two calls to swap, the behavior is the same as pass by reference. For the third call, swap acts as if it has the body

temp = value;
value = list[value];
list[value] = temp;

Because value changes partway through, the two uses of list[value] name different elements: temp == 1, then value = list[1] == 2, then list[2] = 1. As a result, value == 2 (what was stored in list[1]) and list[2] == 1; list[1] remains 2.
d. Passed by value-result

With pass by value-result, the addresses of the actual parameters are computed once, at call time, and the copies are written back to those addresses on return. Since nothing here re-evaluates list[value] after value changes, value-result has the same effect as reference.
12. Argue against the C design of providing only function subprograms.

If a language provides only functions, then either programmers must live with the restriction of returning only a single result from any subprogram, or functions must allow side effects, which is generally considered bad. Since subprograms that can modify only a single value are too restrictive, C's choice is not a good one.
Page 409 ff:
5. Show the stack with all activation record instances, including static and dynamic chains, when execution reaches the indicated position in the skeleton program below. Assume Bigsub is at level 1 and the order of subprogram invocation is Bigsub calls A, A calls B, B calls A, A calls C, and C calls D.
procedure Bigsub;
  procedure C; forward;
  procedure A;
    procedure B;
    end; {B}
  end; {A}
  procedure C;
    procedure D;
      *** Here is the point ***
    end; {D}
  end; {C}
end; {Bigsub}
The answer shows only the static and dynamic links; each activation record would also contain space for parameters, the return address, and so on.
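Under the stated call order (Bigsub, A, B, A, C, D), the stack at the marked point can be sketched as follows, top of stack first. Static links follow lexical nesting (A and C are declared in Bigsub, B in A, D in C); dynamic links follow the callers:

```
top ->  D        static: C         dynamic: C
        C        static: Bigsub    dynamic: A (2nd instance)
        A (2nd)  static: Bigsub    dynamic: B
        B        static: A (1st)   dynamic: A (1st)
        A (1st)  static: Bigsub    dynamic: Bigsub
        Bigsub   static: --        dynamic: --
```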
7. Although local variables in Pascal procedures are dynamically allocated at the beginning of each activation, under what circumstances could the value of a local in a particular activation retain the value of the previous activation?

Each activation allocates its variables in exactly the same order. Variables are not initialized to any value unless the program contains an initialization statement for them; they simply have whatever value is stored in the location they are allocated. If a procedure finishes executing, returns, and is immediately reinvoked, a variable is assigned the same stack location it had on the previous invocation, and so still holds its last value from that invocation.
Page 433:
9. Suppose someone designed a stack abstract data type in which the function top returned an access path (or pointer) rather than returning a copy of the top element. This is not a true data abstraction. Why? Give an example that illustrates the problem.
If a pointer to the top value on the stack is returned, there is nothing to prevent the main program from then changing the top element of the stack (essentially performing a pop followed by a push). Indeed, with a pointer, the main program can retrieve or change anything stored in the stack, bypassing the operations provided by the ADT. For example:
s: stack (int);
p: *int;
p = s.top;
*p = 257;
Page 488:
12. Compare the multiple inheritance of C++ with that provided by interfaces in Java.
C++ inheritance is implementation inheritance: a class inheriting from two or more superclasses actually inherits the code of those classes. Java's interface mechanism is interface inheritance: a class implementing two or more interfaces inherits only the method signatures, and must provide its own implementations for the methods of each interface.
Page 530 ff:
5. Busy waiting is a method whereby a task waits for a given event by continuously checking for that event to occur. What is the main problem with this approach?
The main problem is that the task burns CPU cycles uselessly while waiting for the event to occur. If the task were suspended until the event occurred, those cycles could be used by another task.
9. Suppose two tasks A and B must use the shared variable Buf_Size. Task A adds 2 to Buf_Size and task B subtracts 1 from it. Assume that such arithmetic operations are done by the three-step process of fetching the current value, performing the arithmetic, and putting the new value back. In the absence of competition synchronization, what sequences of events are possible and what values result from these operations? Assume the initial value of Buf_Size is 6.
The idea here is that the add and subtract operations are not atomic and can be interrupted in mid-operation, at which point the other task may run. If A runs to completion and then B runs to completion (or B first, then A), Buf_Size ends with the value 7 (6 + 2 - 1). But if one task fetches Buf_Size after the other has fetched it and before the other has stored its result, both tasks start from the value 6 and one task's update is lost: whichever task stores last determines the final value. If A stores last, Buf_Size ends at 8; if B stores last, Buf_Size ends at 5.
Page 563 ff:
6. In languages without exception-handling facilities, it is common to have most subprograms include an "error" parameter, which can be set to some value representing "OK" or some other value representing "error in procedure." What advantage does a linguistic exception-handling facility like that of Ada have over this method?
If a parameter is passed, the error status code must be checked after each call to a subprogram, obscuring the program logic. Also, the error-handling code needs to be located everywhere such a check takes place, further cluttering the program (the code could be placed in one subprogram, or branched to using GOTOs, but neither of these is a clean solution). An exception-handling mechanism eliminates the need for constant checks and also provides a clean way to share exception-handling code (by propagation).
15. Consider the following Java skeletal program. In each of the throw statements, where is the exception handled? Note that fun1 is called from fun2 in class Small.
class Big {
    int i;
    float f;
    void fun1 () throws int {
        try {
            throw i;
            throw f;
        }
        catch (float) {…}
    }
}

class Small {
    int j;
    float g;
    void fun2 () throws float {
        try {
            try {
                Big.fun1 ();
                throw j;
                throw g;
            }
            catch (int) {…}
        }
        catch (float) {…}
    }
}
Throw i is caught by the catch (int) of the inner try of fun2: fun1 has no int handler, so the exception propagates to its caller. Throw f is caught by the catch (float) of fun1. Throw j is caught by the catch (int) of the inner try of fun2. Throw g is caught by the catch (float) of the outer try of fun2.
Page 639:
7. Write a Prolog program that returns the last element of a list.
last ( Item, [ Item ] ).      % a one-element list: Item is its last element
last ( Item, [ _Head | Rest ] ) :-
    last ( Item, Rest ).      % otherwise, the last element is the last of the tail