Talk:Evaluation strategy

Evaluation order

I've removed the following paragraph from the article:

Note that call-by-value does not necessarily mean left-to-right evaluation. While most programming languages that use call-by-value do evaluate function arguments left-to-right, some (such as OCaml) evaluate functions and their arguments right-to-left.

Firstly, it's not clear what purpose this is supposed to serve. I don't see anything in what precedes it that might be taken to imply that evaluation is left-to-right. If this is considered an important point, it could be made in just two or three words by adding to the previous paragraph a statement that the order of evaluation may vary.

Secondly, it's partly unsourced and partly inaccurate. Unsourced: while it seems likely that "most" call-by-value languages evaluate left-to-right, it's not a claim we can make without providing evidence. And inaccurate: OCaml uses call-by-reference, with the exception that immutable int values are encoded as nonaligned pointers and can therefore be passed by value.

Haeleth Talk 14:02, 7 February 2006 (UTC)Reply
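
The point about ordering can be illustrated directly: under call-by-value, the order in which argument expressions are evaluated is observable only through their side effects. A minimal Python sketch (the helper names are made up; CPython happens to evaluate arguments left-to-right, while the claim above is that OCaml goes right-to-left):

def trace(label, value):
    # The print is a side effect that makes the evaluation order visible.
    print("evaluating", label)
    return value

def add(x, y):
    return x + y

# Prints "evaluating left" before "evaluating right" in CPython; a
# right-to-left implementation would print them in the opposite order,
# yet both are call-by-value and both return 5.
result = add(trace("left", 2), trace("right", 3))
print(result)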

The purpose of the paragraph is to inform the reader that call-by-value evaluation is not one particular strategy, but rather a family of strategies.

OCaml uses call-by-value, just like any other ML. If you want to pass a reference into a function, you pass an explicit reference by value, which brings me to another point.

Your removal of C++ bias in the article is appreciated; however, you've introduced some factual inaccuracies – for example, a pointer is not the same as a reference, and while many functional languages represent large heap values as boxed pointers, they are semantically call-by-value because the values pointed to do not change. We're not talking about calling conventions, we're talking about evaluation strategies. --bmills 14:49, 7 February 2006 (UTC)Reply

Clem Baker-Finch is my Comp Lecturer! Go Clem! (he's a funny guy) 61.9.204.168 04:12, 5 June 2006 (UTC)Reply

call-by-value-result

The article currently says

Call-by-copy-restore or call-by-value-result is a special case of call-by-reference where the provided reference is unique to the caller.

Um, I'm fairly certain that call-by-value-result does not involve a reference, but instead copies the value in and out (ideally in registers, but compilers using a caller-restores-stack protocol can copy results off the stack). I know the Ada 83 spec had special rules for the interaction of in out parameters and exceptions to allow implementors to use call-by-reference for large types but require use of call-by-value for small types.

Is not a correction needed here? Cheers, CWC(talk) 19:19, 23 August 2006 (UTC)Reply

The OS/360 Fortran compilers pass scalars by value-result. Specifically, they pass the address, as usual, but the called routine makes a local copy, and then copies back before return. The address is not special. On the other hand, many Fortran 90 compilers will make a contiguous copy before the call, pass its address, then copy back on return. This is needed, as some arguments are required to be contiguous. I suppose I could imagine some writing the data onto the stack, allowing the callee to change it, and then copying back on return. I don't know any that do that.
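
A rough Python sketch of the copy-in/copy-out behaviour described above, with one-element lists standing in for storage cells and all names invented for illustration; the two schemes only diverge when the arguments alias each other:

def callee(a, b):
    # The same body is used for both simulations: a := a + 1; b := b + 10
    a[0] += 1
    b[0] += 10

def call_by_reference(proc, *cells):
    # The callee works directly on the caller's cells.
    proc(*cells)

def call_by_value_result(proc, *cells):
    # Copy in: the callee gets private cells holding copies of the values.
    copies = [[c[0]] for c in cells]
    proc(*copies)
    # Copy out (restore): results are written back only on return.
    for cell, copy in zip(cells, copies):
        cell[0] = copy[0]

x = [0]
call_by_reference(callee, x, x)      # both parameters alias x
print(x[0])                          # 11: the updates accumulate through the alias

x = [0]
call_by_value_result(callee, x, x)   # each parameter gets its own copy
print(x[0])                          # 10: whichever copy is restored last wins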

call by name in SQL

Jet SQL uses a version of call-by-name/call-by-need. When used as arguments for a column, functions with no dynamic arguments are memoized, while functions with dynamic arguments are call-by-name, which means that they are called multiple times. There is no protection against side effects, so a function may return different values for the same arguments. 218.214.148.10 00:10, 27 September 2006 (UTC)Reply
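
A small Python sketch of the two behaviours described above, with the argument expression modelled as a thunk (the names are invented):

calls = 0

def next_value():
    # A "function" whose result depends on hidden state, so re-evaluation
    # can yield different values for the same (empty) argument list.
    global calls
    calls += 1
    return calls

def call_by_name(thunk):
    # The argument expression is re-evaluated at every use.
    return (thunk(), thunk())

def memoized(thunk):
    # The argument expression is evaluated once and the result reused.
    value = thunk()
    return (value, value)

print(memoized(next_value))      # (1, 1): one evaluation, result shared
print(call_by_name(next_value))  # (2, 3): each use re-runs the function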

scheme uses call-by-reference

According to the current draft for R6RS, Scheme uses call-by-value, but in contrast to lazy evaluation rather than to call-by-reference. It also says that Scheme always uses call-by-reference. So in the terminology of this article it uses call-by-reference, not call-by-value. --MarSch 16:59, 20 October 2006 (UTC)Reply

Here's the relevant paragraph, copy-and-pasted from page 5 of draft 91 of R6RS, with two words emphasised:
Arguments to Scheme procedures are always passed by value, which means that the actual argument expressions are evaluated before the procedure gains control, whether the procedure needs the result of the evaluation or not. C, C#, Common Lisp, Python, Ruby, and Smalltalk are other languages that always pass arguments by value. This is distinct from the lazy-evaluation semantics of Haskell, or the call-by-name semantics of Algol 60, where an argument expression is not evaluated unless its value is needed by the procedure. Note that call-by-value refers to a different distinction than the distinction between by-value and by-reference passing in Pascal. In Scheme, all data structures are passed by-reference.
Remember that Scheme variables are bound to locations, which in turn contain values. In (say) Pascal, proc(x) can modify the variable x if proc uses call-by-reference (ie., a "var" parameter). In Scheme, (proc x) can never modify the contents of the location bound to x (assuming proc is a procedure). So, strictly speaking, Scheme uses call-by-value.
The complication is that Scheme has "pointer semantics": if x is a vector, the location bound to x holds a pointer to the vector, not the vector itself, and that pointer is copied into proc, which can use vector-set! on the vector. In this case, after proc returns, x is still bound to the same location, and that location still contains a pointer to the vector, but the contents of the vector have been changed. (In Pascal, if x has type (say) array [1..3] of real, using x as a value parameter means copying the whole array and can never modify x.) Similar arguments apply to the other non-atomic types: pairs, strings, records, etc. So (proc x) cannot modify the contents of x, but can indirectly modify the value of x by modifying the data structure to which x refers.
In short, because Scheme has a layer of indirection between variables and data structures, it is call-by-value as far as variables are concerned, but call-by-reference as far as non-atomic values are concerned.
I hope this helps. Cheers, CWC(talk) 10:30, 21 October 2006 (UTC)Reply
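
The same distinction shows up in any language with these "pointer semantics"; a small Python analogue of the vector example, where an element assignment plays the role of vector-set!:

def rebind(x):
    # Assigning to the parameter only changes the callee's own binding,
    # like set! on a formal parameter in Scheme.
    x = [9, 9, 9]

def mutate(x):
    # Mutating the object the parameter refers to is visible to the caller,
    # like vector-set! on a passed vector.
    x[0] = 9

v = [1, 2, 3]
rebind(v)
print(v)   # [1, 2, 3] -- the caller's binding is untouched
mutate(v)
print(v)   # [9, 2, 3] -- the shared object has been modified
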
The semantics of Scheme are identical to those for objects in Java. The only thing that you can manipulate or pass in Scheme are references. Variables are references; you can give a variable another value by changing where the reference "points to". So in that sense, yes, you pass the "value" of references, so it's pass-by-value. Data structures are structures of references. I usually like to think there are no data structures at all, since they are not fundamental, and can always be constructed using lambdas. If you take this perspective, then the "mutability" of elements of data structures corresponds to the ability to change variables (which are references). And it becomes very simple: you can only deal with references (all expressions are references), and the values those references point to are immutable. --Spoon! 05:58, 22 October 2006 (UTC)Reply
I think I follow you. Since any object can be represented by a closure in which the components of the object are stored as variables, object-mutation primitives like set-car! are in a way special cases of good old set!. Hmm, I want to ponder that one a bit more, because I've been thinking of set! as the location-mutator and set-car! etc as conceptually distinct object-mutators (partly from reading the discussion of SRFI-17). CWC(talk) 13:37, 22 October 2006 (UTC)Reply

Glug... there are multiple competing definitions of call-by-reference. I claim that the most satisfying one is the one that appears in EOPL: "When an operand is a variable and the called procedure assigns to the corresponding formal parameter, is this visible to the caller as a change in the binding of its variable?" Call-by value: no. Call-by-reference: yes. That is, call-by-value vs. call-by-reference has to do with the behavior of assignment to bindings, and not to assignment within structures. The text provides this example:

let p = proc (x) x := 5
in let a = 3;
       b = 4
   in begin
       p(a);
       p(b);
       +(a, b)
     end

Using call-by-value, this produces 7. Using call-by-reference, this produces 10. This question is entirely orthogonal to whether or not changes to passed values (e.g., objects) are visible in the calling procedure. Does anyone mind if I take a crack at a rewrite of this section? Clements (talk) 17:03, 31 October 2008 (UTC)Reply
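
For readers who want to run something, here is a rough Python transcription of the EOPL example; Python only offers the call-by-value behaviour, so the call-by-reference case is simulated with one-element lists standing in for the callers' bindings (all names invented):

# Call-by-value: assignment to the formal parameter is invisible to the caller.
def p_value(x):
    x = 5

a, b = 3, 4
p_value(a)
p_value(b)
print(a + b)   # 7

# Call-by-reference, simulated with explicit cells for the callers' variables.
def p_ref(x):
    x[0] = 5

a_cell, b_cell = [3], [4]
p_ref(a_cell)
p_ref(b_cell)
print(a_cell[0] + b_cell[0])   # 10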

Evaluation strategy vs parameter passing mechanism

Isn't there an important distinction between concepts like normal and applicative order reduction and concepts like call by reference, call by value and call by name? The former concern the rule(s) used for reducing an expression to a final result, while the latter concern how parameters are passed to a function. There's a clear relation: a particular evaluation strategy may restrict the kinds of parameter passing mechanisms that can be used; but the basic concepts are different. Abcarter 14:33, 14 December 2006 (UTC)Reply

I agree with Abcarter. The section seems to confuse the concepts a bit. "Applicative order" (and strict evaluation) is usually used to refer to languages that have call-by-value semantics, or one of the other strict semantics. Likewise, normal order is used to refer to languages with call-by-name or call-by-need semantics. I have never seen the distinction (comparison) made in the article between applicative order and call-by-value. I believe it is incorrect for the reason described above. If not, explicit references to the definitions would be appreciated. —Preceding unsigned comment added by 85.80.230.242 (talk) 18:30, 6 October 2008 (UTC)Reply

At some point, it is a matter of implementation details. Evaluation strategy is what it looks like to the user, whereas "call by" is more related to implementation. Call by value can be implemented by passing the address, with the callee making a copy. In the case of large objects and small stacks, that makes sense as an implementation, but it still looks like call by value to the user. Gah4 (talk) 21:58, 25 February 2020 (UTC)Reply

Strict/Eager and Non-strict/Lazy

In the reading that I've done, strict and eager are pretty much equivalent terms. I could easily be wrong, but I'm certainly confused why "Church encoding" is the determining fact here. The issue with non-strict and lazy is not as simple, but in most of the literature I've read lazy is a type of non-strict evaluation where every argument is never evaluated more than once. But the same confusion remains why Church encoding is particularly relevant here. Could someone please provide a citation. Abcarter 01:55, 19 December 2006 (UTC)Reply

Conditional Statements

I deleted the sentence concerning conditional statements in the section on non-strict evaluation. As the definition itself mentions, non-strict evaluation is about how arguments to a function are processed. There is a fundamental distinction here: expressions can be passed to an argument, a command or statement cannot. Abcarter 02:10, 19 December 2006 (UTC)Reply

"Statement" was probably the wrong word in "conditional statement", but the concept was sound; consider the conditional structure in Lisp (which is essentially a function with non-strict evaluation, though in Lisp all "functions" use strict evaluation, so instead it's a "macro"), the ternary operator in C/C++/Perl/etc. (which is an operator, not a function, and indeed not even overloadable in the C++ case so really not a function, but still an example of an operator with non-strict evaluation, unlike most C and C++ operators), and the conditional structure in ML (which is like the ternary operator as used in C and C++, except that ML is stricter about the whole boolean thing). —RuakhTALK 03:21, 19 December 2006 (UTC)Reply
OK, "conditional expression" is a different matter. My own area of interest is functional programming so there's a world of difference between statements (bad) and expressions (good) :). But now consider a conditional expression that returns a list, something of the form:
IF <boolean list> THEN <list expression> ELSE <list expression>
The idea would be to return a list containing values from either the first or second list expression depending on the boolean values. In this case both expressions would be evaluated. —The preceding unsigned comment was added by Abcarter (talkcontribs) 11:48, 19 December 2006 (UTC).Reply
Thought more about it and after some additional reading added back a reference to conditional expressions. Abcarter 16:42, 30 December 2006 (UTC)Reply
I am not seeing an interesting difference between short-circuit evaluation and the use of lazy evaluation in conditional expressions. Isn't lazy evaluation in a conditional expression an instance of short-circuit evaluation? There are three sub-expressions in a conditional expression. If the first sub-expression is true then evaluate the second sub-expression, otherwise only evaluate the third. It's no different than evaluating the Boolean expression (p -> q) v (-p -> r). I've done a 360 on this question a couple of times, so I'm happy to hear a different opinion. Abcarter 04:07, 31 December 2006 (UTC)Reply
Is your question a response to my recent edit? If so, here's my attempt at explaining my thought process: programmers sometimes use the short-circuiting of a language's AND and OR operators to effect conditional-like evaluation (e.g. in something like if(ptr != NULL && *ptr != 0) in C or C++, where &&'s short-circuiting effect prevents ptr from being dereferenced if it's the null pointer), but this isn't really the primary reason for the short-circuiting: the primary reason is to avoid needless computation when the ultimate value of a boolean expression is already known. (Well, in modern languages it can be hard to tell which is the primary reason, as the two are rather intertwined now, but I think a convincing argument can be made that laziness was the original reason, at least.) By contrast, the short-circuiting of a language's conditional operator is absolutely fundamental to its functioning, at least in a language where expressions can have side effects (which so far as I know is the case in every language that has a conditional operator; there are languages where really only complete statements have side effects, but I don't know of any that are of that type and that nonetheless have conditional operators that can be used in expressions). I hope that explanation makes some sense … —RuakhTALK 08:04, 31 December 2006 (UTC)Reply
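
A Python rendering of the guard idiom described above, with a possibly-empty list standing in for the possibly-null pointer (names invented):

def first_is_nonzero(items):
    # Short-circuiting "and": the right operand is evaluated only if the
    # left one is true, so items[0] is never touched for an empty list.
    return len(items) > 0 and items[0] != 0

print(first_is_nonzero([]))      # False, and no IndexError
print(first_is_nonzero([0, 7]))  # False
print(first_is_nonzero([3]))     # True
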
Yes this was a response. There are a couple of things I might want to say, but first I just want to be sure we're clear on one point. I originally deleted the statement that "Conditional statements" were an instance of non-strict evaluation, but after further consideration and further reading I put that statement back. However in doing so I changed "statement" to "expression", which for me was a fundamental change. Statements are imperative constructions and as such the notion of non-strict evaluation doesn't apply. The essential purpose of a conditional expression is, like any expression, to return a value. Note that my research interest is functional programming where in principle the entire language does not have side effects. Pure functional languages contain no statements, only expressions, and the expressions never have side-effects. So when you say "at least in a language where expressions can have side effects" I'm a little puzzled. I'm thinking we are not quite talking about the same thing. Abcarter TALK 13:27, 31 December 2006 (UTC)Reply
In any real language, there exist expressions with side effects, since real languages need to communicate with the outside world; at least, I've never seen a language without some kind of print statement. Even in a functional language, it would be very confusing, and it would greatly limit the language's expressiveness, if the equivalent of ML's if true then print "foo" else print "bar" printed something other than simply foo.
For that matter, if you'll let me take a broader view, every evaluated expression has the side effect of taking time to evaluate; this is something we make a point of ignoring most of the time, but we'd have difficulty ignoring it in a language where the equivalent of ML's fun fact(x : int) : int = if x <= 1 then 1 else x * fact(x - 1) produced a function that never terminated, because its recursive call was always evaluated, even when not needed.
(I'm using ML for my examples because it's the functional language I know best; if you're not an ML-ian, though, let me know, and I can try to rewrite my examples in ANSI Common Lisp.)
RuakhTALK 17:14, 31 December 2006 (UTC)Reply
There is lots to talk about here, but we should keep the focus of this thread on the relation between non-strict evaluation and conditionals (if you want to post your remarks about languages and side-effects to my talk page I would be happy to respond). I changed "conditional statement" to "conditional expression" exactly because I wanted to restrict the notion to a construct that used conditionals in a way that were no different than other more standard sub-expressions, something as simple as "2+4". The essential purpose of such conditional constructions is to return a value. In some languages conditional expressions can have side-effects, in others they cannot. These kinds of conditional expressions usually use a kind of lazy evaluation that I think is pretty much the same as what you see in short-circuited evaluation of a Boolean expression. In contrast a conditional statement is all about having side-effects, that is its purpose. And here I am in agreement with you, it would be kinda weird to view conditional statements as something like short-circuited evaluation. Abcarter TALK 19:54, 31 December 2006 (UTC)Reply
I don't see what you're responding to. I thought you were talking about my last edit, which had only to do with the purpose of the various non-strict evaluations, but now it doesn't seem like that's what you're talking about at all. —RuakhTALK 22:27, 31 December 2006 (UTC)Reply
I was trying to be clear about what I meant by a conditional expression. I was emphasizing that conditional expressions are no different from any other kind of expression: their essential purpose is to return a value. If you take this view then the reason you use lazy evaluation for conditional expressions is pretty much the same reason you use short-circuit evaluation for a Boolean expression. In particular the process is no different than how you would evaluate the Boolean expression (P→Q) & (~P→R). If P is True then you only have to evaluate Q and if P is False then you only have to evaluate R. Abcarter Talk 01:05, 1 January 2007 (UTC)Reply
O.K., sorry, I see what you're saying now. I still disagree, though; with ANDs and ORs there's a minor efficiency gain by using non-strict evaluation, and shortcutting is really just a perk that people can do without (and that people in some languages do do without), whereas with IF-THENs non-strict evaluation is absolutely essential in any language that supports recursion and/or side effects. Maybe we should just remove any mention of the reason being the same or different? —RuakhTALK 09:27, 1 January 2007 (UTC)Reply
So now you mention recursion! That's an entirely different matter since the sole purpose of a recursive function is to return a value. And you're right, the non-strict evaluation is essential for a recursive expression to terminate properly. I don't see any need for an immediate change. As I said at the start I keep going around and around on this topic. Well we've beaten this horse to a pulp, but it was good talking with you. Abcarter Talk 01:41, 2 January 2007 (UTC)Reply
Yeah, it's been an interesting conversation. :-) —RuakhTALK 02:56, 2 January 2007 (UTC)Reply
Fortran, for not so many years now, has the MERGE function, which is similar to the C conditional operator ... except ... that, as usual for functions, it is not guaranteed to avoid evaluating both expressions. Evaluating only one is an allowed optimization, but not required. Also, the Fortran .AND. and .OR. operators allow but don't require short-circuit evaluation. I suspect that short-circuit logical operators were added to C more for the usefulness of avoiding evaluating things that shouldn't be evaluated than for saving time, but it is hard to know. Gah4 (talk) 00:31, 26 February 2020 (UTC)Reply
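
A Python sketch of the contrast described here: a merge written as an ordinary (strict) function evaluates both alternatives, as Fortran's MERGE is allowed to, while the built-in conditional expression does not; the factorial example above shows why that matters for recursion (function names invented):

def strict_merge(condition, if_true, if_false):
    # An ordinary function: both alternatives are already evaluated by the
    # time this body runs.
    return if_true if condition else if_false

def fact_bad(x):
    # Never terminates: the recursive alternative is evaluated even when
    # x <= 1, because strict_merge is just a function call.
    return strict_merge(x <= 1, 1, x * fact_bad(x - 1))

def fact_good(x):
    # The conditional expression is non-strict, so the recursive branch is
    # evaluated only when it is actually selected.
    return 1 if x <= 1 else x * fact_good(x - 1)

print(fact_good(5))    # 120
# print(fact_bad(5))   # would recurse until RecursionError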

Content seems to cover only function calls... what about expressions in general?

Perhaps I'm ignorant... but when I clicked the link to here from the Expression page, and read the first sentence ... An evaluation strategy (or reduction strategy) for a programming language is a set of (usually deterministic) rules for defining the evaluation of expressions under β-reduction. ... I thought I found the page I was looking for. But there's nothing about expression evaluation, in general. Does anyone else see the need for an explanation of evaluation strategy relating to operators, not just the various forms and issues of function calls? Maybe the research just has not been done yet. Comments? AgentFriday 02:46, 28 December 2006 (UTC)Reply

Question, at the level that we are talking about is there a real difference between how a function is handled and how an operator is handled? Isn't a primitive operator simply a built-in function with perhaps a different syntax? Abcarter 17:38, 28 December 2006 (UTC)Reply
Two answers :) 1: Perhaps a majority of the visitors to this page are not thinking at, or expecting, such an academic discussion on expression evaluation. I personally find the discussion interesting, but feel the page is lacking in coverage of infix notation, and strategies for resolving it. 2: Yes, every operation in an expression can be written as a function call, and in fact that is more obvious as to order of evaluation. However, given an expression in a standard programming language, you must still apply the rules of precedence and associativity first in order to know how to group the function calls. ... After some more searching, I found the page Common operator notation, which at least touches the surface of what I was expecting to find here. Perhaps it would be beneficial for this page to have a link there. It also may be that the link to here from Expression (and others) is inappropriate, or at least could use qualification. I'll trust that the title is appropriate for the content, but the first paragraph makes several references to expressions, which in the classical sense (and even in the context in which they are referenced) are infix notation. An evaluation strategy is also needed for this type of expression, but the article makes no reference to such or why it is not covered. Looking over the articles that link to this one, about 1/3 seem to be from pages mentioning expression evaluation (with no hint of the function-specific discussion ahead), about 1/3 are links from function-related topics (appropriate), and the rest seem to be miscellaneous "see also" links, etc. To me, there seems to be a disconnect. Anyway, thanks for reading. I thought some change was in order, but decided I was not qualified to jump in and make changes in this case. Hope you can see my point. AgentFriday 00:56, 29 December 2006 (UTC)Reply
Surprisingly enough I think we're talking about two entirely different concepts. Just to confirm this I'm assuming that you are concerned with expressions like 2 + 5 * 3 and whether this evaluates to 2 + (5 * 3) = 17 or to (2 + 5) * 3 = 21. In particular you're concerned with rules of precedence. If so then my initial remark stands. Evaluation strategy as referred to in this entry is in fact a technical topic dealing with the semantics of a programming language. In this sense an expression is simply a syntactic construct that can be reduced to a data value. Don't get confused by the examples that happen to use infix notation. It's just easier and more clear to write 2 + 3 than add(2, 3), but at this level of discussion there is no conceptual difference, they're both just expressions. None of this is to say that your concerns don't deserve their own entry and it may well make sense to mark this distinction and provide a link. Abcarter 13:58, 29 December 2006 (UTC)Reply
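
In Python terms, the claim is simply that once precedence has grouped the operands, the infix form and the explicit call form denote the same evaluation (operator.add and operator.mul are just the built-in operators as functions):

from operator import add, mul

# Precedence groups 2 + 5 * 3 as 2 + (5 * 3); after that, the infix
# expression and the nested calls evaluate identically.
print(2 + 5 * 3)          # 17
print(add(2, mul(5, 3)))  # 17
print(mul(add(2, 5), 3))  # 21, the other grouping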

I suppose as this is Evaluation strategy and not procedure calling method that question makes sense, but even the latter becomes important in some case. For languages that allow for aggregate (structures, arrays, or combinations of the two) expressions, evaluation strategy is still important. PL/I, which has allowed for array and structure expressions since its origins in the early 1960's, requires that changes take effect immediately, similar to call by reference. Fortran, since array expressions were added in 1990, requires that the whole right hand side of an assignment be evaluated before the left side is changed (at least in the cases where it matters). This could be considered similar to call by value. Gah4 (talk) 08:40, 13 June 2017 (UTC)Reply
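
A small Python sketch of the two semantics described above, using a list shift as the "array expression" (this only illustrates the idea; it is not PL/I or Fortran syntax):

a = [1, 2, 3, 4]
# Evaluate the whole right-hand side first (the Fortran array-expression
# rule): the shifted values are built before any element is overwritten.
rhs = [a[i - 1] for i in range(1, len(a))]
a[1:] = rhs
print(a)   # [1, 1, 2, 3]

b = [1, 2, 3, 4]
# Element-by-element update with changes taking effect immediately:
# later elements read values that were already overwritten.
for i in range(1, len(b)):
    b[i] = b[i - 1]
print(b)   # [1, 1, 1, 1]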

Scheme of memory context

Could anyone add a small picture that demonstrates the interactions between a function and the calling function? I am no expert in programming languages and for me it would just be helpful to see what symbols can be passed from one context into another.

217.10.60.85 (talk) 10:30, 21 November 2007 (UTC)Reply

Java and call-by-reference

I have removed the following sentence from the section on call-by-reference:

Some languages straddle both worlds; for example, Java is a call-by-value language, but since the results of most Java expressions are references to anonymous objects, it frequently displays call-by-reference semantics without the need for any explicit reference syntax.

As noted elsewhere on the page, Java uses "call-by-value where the value is a reference" or call-by-sharing. This may sometimes give the same result as call-by-reference, but that's a totally different thing to "display[ing] call-by-reference semantics". HenryAyoola (talk) 11:57, 6 October 2009 (UTC)Reply

I suppose, but as long as the reference variable isn't changed in the callee, the effect is the same as call by reference. Gah4 (talk) 08:43, 13 June 2017 (UTC)Reply
IIRC Java passes primitives by copy of value, and objects by copy of reference. It can't pass objects, it always accesses them through references.
Not quite pass-by-reference since the caller's variables are inaccessible, only its referenced object content.
Not quite pass-by-sharing, since Java has copied primitives. Musaran (talk) 21:53, 15 November 2023 (UTC)Reply
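
The classic litmus test for the distinction is whether a callee can swap the caller's variables; a Python sketch (Python behaves like Java does for objects here):

def swap(x, y):
    # Rebinds the local names only; the caller's variables are untouched.
    # Only true call-by-reference would make this swap visible outside.
    x, y = y, x

a, b = [1], [2]
swap(a, b)
print(a, b)   # [1] [2] -- not swapped

def clear(x):
    # Mutating the shared object, by contrast, is visible to the caller.
    x.clear()

clear(a)
print(a)      # []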

visual basic "VB(A)" is default call-by-reference according to microsoft

In your Article you have: "A few languages, such as C++, PHP, Visual Basic .NET, C# and REALbasic, default to call-by-value"

http://msdn.microsoft.com/en-us/library/ee478101%28v=vs.84%29.aspx : "In a Sub or Function declaration, each parameter can be specified as ByRef or ByVal. If neither is specified, the default is ByRef."

but i'll allow that very few VB programmers can state this with certainty -sigh- — Preceding unsigned comment added by 50.159.60.209 (talk) 21:54, 22 December 2013 (UTC)Reply

I don't know, but it seems that VB(A) is not Visual Basic.NET. http://msdn.microsoft.com/en-us/library/ddck1z30.aspx states "The default in Visual Basic is to pass arguments by value." Glrx (talk) 21:08, 26 December 2013 (UTC)Reply

Call by address

There should be no distinct paragraph for call by address, since this article is about language semantics, not programming techniques. I insist that it should be merged back into call-by-reference. The term "call by address" should just be mentioned, and that's all. Moreover, two shortcomings were introduced. First, call by address is memory-unsafe only in C, but not in ML, so this is not common. Second, the code sample in C was more informative, showing three cases of using pointers in C, which is important when comparing call-by-value with call-by-address. Arachnelis (talk) 10:03, 1 June 2014 (UTC)Reply

Insist seems a little strong, as in general it should be the consensus of editors, but otherwise agree that call by address should be an alternative name in the call by reference section. Gah4 (talk) 08:46, 13 June 2017 (UTC)Reply

Ok Kelvin1980 (talk) 04:10, 5 November 2017 (UTC)Reply

Pythonisms in "call by sharing" section

Citing the section Evaluation strategy#Call by sharing:

For immutable objects, there is no real difference between call by sharing and call by value, except for the object identity.

This needs some clarification without going too deeply into the idiosyncrasies of Python. What is meant is that each object has an "identity", typically represented as the bit pattern of a pointer to the object, and a function can find out this identity and act accordingly. Should we just rephrase this as

For immutable objects, there is typically no difference between call by sharing and call by value.

and leave the issue of identity aside?

The section also has one more Python-specific part, where the assignment to a variable is described (it previously claimed a "new variable" was created, which strictly speaking is not what happens). This is really language-specific. One can imagine a language where assignment is an overloadable in-place operation, (like in C++ with operator=, and like some assignments in Python with __setitem__), in which case a let-style construct is needed to express the function:

# pseudo-Python
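# (assignment is imagined here as an overloadable in-place operation, so
#  "l = [1]" mutates the object l refers to, while "let" creates a new binding)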
def f(l):
    l = [1]

def g(l):
    let l = [1, 2, 3]

m = []
f(m)
# m is now [1]

g(m)
# m is still [1]

QVVERTYVS (hm?) 12:50, 4 June 2014 (UTC)Reply
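
A sketch of the identity point in actual Python: for an immutable argument, object identity is the only way a callee can tell sharing from copying (names invented):

def same_object(x, original):
    # The callee can observe identity even though the value is immutable;
    # a copying (pure call-by-value) implementation could return False here.
    return x is original

t = tuple([1, 2, 3])          # an immutable tuple
u = tuple([1, 2, 3])          # an equal but distinct tuple
print(same_object(t, t))      # True: call by sharing passes the object itself
print(t == u, same_object(u, t))   # True False: equal values, different identity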

Is JavaScript call-by-value or call-by-reference?

Is JavaScript call-by-value or call-by-reference? specifically calls out this article:

The Wikipedia entry for Evaluation Strategy is long, confusing and sometimes contradictory because the various terms have been inconsistently applied to languages throughout the years.

Seems like the article has a relatively clear explanation, but I should find a couple more weighty sources. II | (t - c) 07:58, 29 May 2016 (UTC)Reply

I think said blogger is complaining more about other programmers using these terms in a confusing and contradictory way than that the Wikipedia article is (or at worst that the latter necessarily reflects the confusion of the former). The term "call-by-value" was originally a particular evaluation strategy of lambda calculi (contrasted to e.g. "call-by-name"). In imperative languages that term was repurposed as a particular argument passing scheme (contrasted to e.g. call-by-reference). (The article doesn't distinguish clearly enough between these two concepts, evaluation strategy and parameter passing, at the moment. Or rather, it tries quite hard to make them seem like the same thing.)
The whole point of the blog post seems to be to redefine the term call-by-reference to be something more specific than what most people mean when using that term. If you pass a reference to an object to a function, but that reference is passed as a value instead of another reference, then this should, according to him, still be called "call-by-value" instead of "call-by-reference". I think this is a valid distinction to make, but not one that is often made in practice. (The article doesn't seem to definitively commit to one or the other.)
Ruud 11:16, 29 May 2016 (UTC)Reply

As long as the callee in a call by value system doesn't modify a passed reference, the effect is the same as call by reference. But in languages like C and Java, the callee is allowed to modify a reference, such that it refers to something else. In this case, passing a reference by value is the right explanation. Gah4 (talk) 08:51, 13 June 2017 (UTC)Reply

I don't think Python community favors "call-by-sharing" as a term

But I'm willing to be disproved. Right now, the article says "Although this term has widespread usage in the Python community, identical semantics in other languages such as Java and Visual Basic are often described as call by value, where the value is implied to be a reference to the object.[citation needed]". Except even the Python tutorial avoids using "call-by-sharing" as a term; it describes the language as "call by value" with a footnote that indicates that "call by object reference" might be a more precise term. It doesn't use "call by sharing" at all. —ShadowRanger (talk|stalk) 02:11, 23 August 2016 (UTC)Reply

Is client-side scripting an evaluation strategy?

Since remote evaluation is defined as the server executing code from the client, and client-side scripting is defined as the opposite, isn't client-side scripting an evaluation strategy too? --146.140.210.15 (talk) 16:51, 7 February 2017 (UTC)Reply

call by descriptor

There is no section for call by descriptor, where a descriptor, sometimes called a dope vector, is passed, usually by address, to the called routine. This is used by PL/I for passing strings, arrays, and structures, by Fortran 77 for passing strings, and by Fortran 90 and later for assumed shape arrays. The descriptor contains array bounds, and an address (or two) for use by the callee to access the data. Also, VAX allows for %val, %ref, and %descr to override the default calling convention, especially useful with mixed language programming. Gah4 (talk) 08:59, 13 June 2017 (UTC)Reply

After forgetting that I wrote the above, I am back here again. The PL/I page indicates that it uses call by reference, but in many cases a descriptor (generally), or dope vector (what IBM documentation calls it) is used. For one, this passes size and bounds information, but also it allows for non-contiguous data, such as array sections. Should we have a call by descriptor page? Gah4 (talk) 20:24, 7 May 2019 (UTC)Reply
Seems a good idea. Those thinking that call-by-reference was the usual scheme in say Fortran, so usual as to not receive much mention, will (like me) be disconcerted by the results of passing certain types of array to a well-trusted routine such as a binary search and find that copy-in copy-out was happening, thus requiring an order N activity after which an order log(N) search was an irrelevant detail contributing to a thirty-fold reduction in speed as reward for the seemingly innocent improvement in readability afforded by combining a group of separate normal arrays into an aggregate array. This was because fancy modern syntax allow fancy parameter usage such as defining an aggregate STUFF(1:N) containing a component NAME and passing STUFF(1:N).NAME as the parameter for a search. But the routine expects contiguous array data, and so a copy of the elements of STUFF(1:N).NAME was made and passed. This is because the routine does not expect to receive a compound entity and so does not expect information such as would be provided by a "dope vector" containing not just the base address (as with pass by reference) but also the "stride" for the addressing of consecutive elements of the array, which is always one for normal contiguous arrays. Such added facilities enable conveniently flexible syntax and can avoid the need for copying array selections (say of a row or a column, or a block somewhere in the interior of a multi-dimensional array), but at the cost of the increased overhead in passing the additional information, plus of course the extra work required for each access to the complex item that has to employ all the added details. Another trick is to pass an allocatable but not-yet-allocated item to a procedure, which decides how big it should be, allocates it (say, it is to develop a sparse-matrix representation), and exits. The caller now has to know what has happened so as to access the item. The "dope vector" is thus itself passed by reference so that its contents may be altered for all to see. As features are added to languages, ever more subtle distinctions become supported as well. NickyMcLean (talk) 03:40, 8 May 2019 (UTC)Reply
Any disagreement on naming it call by descriptor? I learned much of this reading the IBM documentation for the PL/I (F) compiler. One that I remember was that PL/I would pass an array element by reference, so you could call Fortran programs putting an array element in the argument list. That didn't work for CHARACTER, which is always passed by descriptor. Gah4 (talk) 06:24, 8 May 2019 (UTC)Reply
Go ahead and add it, since it's a thing, you know your stuff, and have sources. But a section here should suffice, not a page. Musaran (talk) 22:09, 15 November 2023 (UTC)Reply
I disagree, I think these low-level details like dope vectors and so on are details of the calling convention, the evaluation strategy is still pass-by-reference. I think the right place to add it would be the PL/I page since it is mainly a PL/I internal implementation detail. Mathnerd314159 (talk) 16:56, 17 November 2023 (UTC)Reply
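
For concreteness, a toy Python model of the idea: what gets passed is a descriptor carrying base storage, offset, length and stride rather than a bare address (this is only an illustration, not any particular compiler's layout):

from dataclasses import dataclass

@dataclass
class Descriptor:
    storage: list   # stands in for the underlying memory
    offset: int     # index of the first element
    length: int     # number of elements
    stride: int     # distance between consecutive elements

    def get(self, i):
        return self.storage[self.offset + i * self.stride]

    def set(self, i, value):
        self.storage[self.offset + i * self.stride] = value

def double_all(desc):
    # The callee never needs a contiguous copy; it addresses the caller's
    # storage through the descriptor, so no copy-in/copy-out is required.
    for i in range(desc.length):
        desc.set(i, 2 * desc.get(i))

memory = list(range(10))                    # 0..9
every_other = Descriptor(memory, 1, 4, 2)   # the "section" at indices 1, 3, 5, 7
double_all(every_other)
print(memory)   # [0, 2, 2, 6, 4, 10, 6, 14, 8, 9]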

Call by need is not a version of call by name

Evaluation strategy#Call by need claims that call by need is a memoized version of call by name. However, lazy evaluation has different semantics from call by name. A typical example is a routine that uses Simpson's rule to do integration, declaring the integration variable and integrand as call by name. The algorithm depends on evaluating the integrand separately for each value set into the integration variable. Use lazy evaluation and you get the wrong results. Shmuel (Seymour J.) Metz Username:Chatul (talk) 22:20, 9 February 2018 (UTC)Reply

Yes, the difference in semantics is precisely that call by need memoizes the result, and thus only evaluates the argument once. Perhaps "version" is the wrong word; would you prefer "variant"? --Macrakis (talk) 22:41, 9 February 2018 (UTC)Reply
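
A Python sketch of the difference, in the spirit of the Simpson's-rule example: the "integrand" is a thunk over an integration variable that the summation routine keeps updating, so memoizing the thunk changes the answer (all names invented):

i = 0
integrand = lambda: i * i   # "f(i)" written as a thunk over the free variable i

def sum_by_name(n, body):
    # Call by name: the body expression is re-evaluated for every i.
    global i
    total = 0
    for i in range(n):
        total += body()
    return total

def sum_by_need(n, body):
    # Call by need: the body expression is evaluated once, on first use,
    # and the memoized result is reused for every later i.
    global i
    total = 0
    cached = None
    for i in range(n):
        if cached is None:
            cached = body()
        total += cached
    return total

print(sum_by_name(4, integrand))   # 0 + 1 + 4 + 9 = 14
print(sum_by_need(4, integrand))   # 0: the single evaluation saw i == 0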

Fortran incorrect

The bits that refer to Fortran are largely incorrect and need tidying up. In this case, the evaluation strategy *is* actually a function of the implementation; the language does not require call by reference. — Preceding unsigned comment added by 84.92.84.4 (talk) 11:23, 18 February 2018 (UTC)Reply

Even more, one implementation might use different ones in different cases. The OS/360 compilers pass scalars by value-result, as it is more efficient (in some cases) but arrays by reference. In some cases, Fortran 90 requires a contiguous array be passed, which might require a contiguous copy to be made, and then copied back on return. Sometimes the decision to make the copy is done at runtime. Gah4 (talk) 01:19, 7 April 2018 (UTC)Reply

On "Call by sharing" and Java

I find the organization of the "call by sharing" / "call by value where the value is a reference" material not optimal:

Already before the section "call by sharing", the concept is referred to under "implicit limitations" of "call by value". Moreover, there it is (correctly) stated that this is frequently referred to as "call by value where the value is a reference", but this description is missing under "call by sharing".

Also, the concept is not even explained under "call by sharing" and only vaguely under "implicit limitations".

However, under "Call by sharing" it is stated (arguably also correctly but confusingly) that in the Java community the same concept is called "call by value".

Putting this together, there are two places for the same concept in the article with a reference from the first to the second but not the other way around and various conflicting names for the same concept.

My suggestions are as follows:

- Shorten "implicit limitations" to a small comment that often only pointers are passed by value and that this will be explained below under "call by sharing"

- Explain there that "call by sharing" is call by value where the value is a reference (= pointer).

- Also explain that intuitively one does not have to think about pointers. Rather one can think about variables directly referring to objects or as names of objects. (Actually this seems to be the cultural difference between the Java and the Python communities: In the Java community people still have pointers in mind (coming from C++) whereas in Python people only have objects in mind. Java people: The values of variables (for non-primitive types) are pointers pointing to objects. Python people: "Names refer to objects", as stated in the documentation[1]. If one thinks about what the second really means, one arrives at the first, but still the intuition is different.)


In addition I want to point out a minor statement in the article which is actually nonsensical:

"It [call by sharing] It is used by languages such as Python,[7] Iota,[8] Java (for object references), Ruby, JavaScript, Scheme, OCaml, AppleScript, and many others."

With the comment on Java one arrives at this nonsensical statement: In Java call by sharing is used for object references, that is object references are passed via references (to the object references). In fact, the references are passed by value.

A corrected statement is "Java (for non-primitive data types)".

Claus from Leipzig (talk) 19:51, 12 August 2018 (UTC)Reply

References

global variables

Should there be a section discussing the use of global variables, such as Fortran COMMON, or external structures in other languages, to pass data to/from called routines? My first thought was that this could be call by sharing, but that seems to be something else. Gah4 (talk) 20:20, 7 May 2019 (UTC)Reply

I support this. It is not strictly a "call by" mechanism, but it certainly is used to "pass" parameters and results.
Note it does not need to be global, a known and accessible allocation from previous scopes may suffice. Like a variable placed in a register by an optimizing compiler. Musaran (talk) 22:21, 15 November 2023 (UTC)Reply
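
A minimal Python sketch of the COMMON-style pattern described above (module-level globals are only a loose analogue of Fortran COMMON blocks; names invented):

shared_input = 0
shared_result = 0

def compute():
    # Reads its "argument" and writes its "result" through shared globals
    # rather than through the argument list.
    global shared_result
    shared_result = shared_input * 2

shared_input = 21
compute()
print(shared_result)   # 42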

Variables are not called

Variables are not called, but they are passed to a sub-routine or function. So the proper wording is "pass by ...", but not "call by ...". — Preceding unsigned comment added by 89.12.24.157 (talk) 06:57, 29 April 2020 (UTC)Reply

You may be right for pass by reference vs pass by value, but call by value, call by name, call by need, and call by push value are technical terms. Although you may disagree with its naming, Wikipedia can't and shouldn't try to change the standard terminology. Ebf526 (talk) 20:01, 2 July 2021 (UTC)Reply

Lenient evaluation

It'd be nice to include some information about lenient evaluation, a non-strict evaluation strategy in-between laziness and strictness used by both Id (programming language) and the parallel Haskell project. Some relevant sources are the Id reference manual and any of the references in How much non-strictness do lenient programs require?.

  Done I added a section, and cleaned up the earlier part of the article to distinguish non-strict from lazy. I didn't use the Id manual, that doesn't even use the phrase "lenient evaluation". --Mathnerd314159 (talk) 22:52, 7 January 2022 (UTC)Reply

Evaluation strategy is specified by the programming language definition, and is not a function of any specific implementation

Note that Fortran allows for either call by reference or call by value result. So, in fact, it is specific to the implementation. That is true for the ANSI standards from Fortran 66 to the most recent version. Fortran IV is mostly close to ANSI Fortran 66, though might be different for different machines. The OS/360 Fortran IV compilers pass arrays by reference and scalars by value result. Gah4 (talk) 01:57, 8 January 2022 (UTC)Reply

I don't think that Fortran does allow the implementation to choose, only the programmer. Can you point in the Fortran standard or the manual of a Fortran compiler a relevant quote? Nowhere man (talk) 00:35, 29 January 2023 (UTC)Reply
It is mostly in the aliasing rules. If your program can tell which one is in use, then it has aliasing which isn't allowed. Gah4 (talk) 06:59, 30 January 2023 (UTC)Reply
The IBM OS/360 Fortran manual describes its calling methods, and its optional by-location dummy arguments enclosed in slashes. The latter (enclosing in slashes) is an extension. Everything not grayed out is standard. Gah4 (talk) 07:45, 30 January 2023 (UTC)Reply
If aliasing is disallowed, and you can only tell the difference between call by value-result and by reference in the presence of aliasing, then they are semantically the same evaluation strategy, even if the implementation is different. --Macrakis (talk) 14:36, 21 February 2023 (UTC)Reply
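
A Python sketch of why aliasing is exactly where the two schemes can be told apart, again using one-element lists as storage cells (all names invented):

g = [10]   # a global cell that the callee also updates directly

def body(x):
    x[0] += 1      # assign through the dummy argument
    g[0] += 100    # and directly through the global it may alias

def by_reference(cell):
    body(cell)

def by_value_result(cell):
    copy = [cell[0]]    # copy in
    body(copy)
    cell[0] = copy[0]   # copy out on return, overwriting the direct update

by_reference(g)   # the dummy argument aliases g
print(g[0])       # 111

g[0] = 10
by_value_result(g)
print(g[0])       # 11: the restore clobbers the direct update to g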

call-by-push-value

My edit adding CBPV was reverted, saying no reliable source says it's an evaluation strategy. But the abstract of the article that defines CBPV clearly puts CBPV in the same category as CBV and CBN:

Call-by-push-value is a new paradigm that subsumes the call-by-name and call-by-value paradigms

— Paul Blain Levy, Call-by-Push-Value: A Subsuming Paradigm.

Nowhere man (talk) 05:47, 9 April 2024 (UTC)Reply

No, he later defines "subsumes" as a translation between languages. So that's where you get "an intermediate language that embeds call-by-value and call-by-name". And that is a primary source, with MOS:PUFFERY. If you look at secondary sources they pretty much all say it is an intermediate language or a model of evaluation. It is true that the CBN and CBV lambda calculi are in the same category as CBPV (intermediate languages), but the CBN and CBV evaluation strategies are more general - they can be defined for any argument-passing mechanism. My search did not uncover any such generalization of CBPV to argument-passing. There is some speculation in this paper that some sort of hybrid, polarized evaluation strategy similar to CBPV would be possible, but it is just that - speculation, left to future work, not a reliable observation that can be cited by Wikipedia. Mathnerd314159 (talk) 18:50, 9 April 2024 (UTC)Reply