Amazing LINQ for .Net

  • Thread starter William Stacey [MVP]

Colin Stutley

A nice extension to the language ... but how does this get applied to a
database connection?

I love the strongly typed syntax, and would love the ability to do away with
SQL statements, which are only validated at runtime. The examples I have seen
so far apply to object collections - what is the syntax when used against a
database connection (or did I miss it because of its simplicity)?

- Colin
 

Jon Skeet [C# MVP]

Yes, I imagine that "var" _will_ become the usual way to declare local
variables, not because it removes type checking, but just because you
get the same thing with less typing.

No, you can't get the same thing. You can get the same binary output,
but you don't get the same level of readable code.
 

Christoph Nahr

Looks like everybody's getting hung up on some silly throwaway
examples. Of course nobody will ever write "var x = 5" instead of
"int x = 5", that would be silly!

The real use of var is for types with lengthy names that are
instantiated with the new keyword rather than literal initializers, or
else for expressions where the type is either obvious or unknown.

Some *real* examples for how useful var can be:

// much more readable than repeating the long identifier!
var x = new MyCompany.MyLongNamespace.MyLongTypeName();

// obviously x is going to be an ElementType
foreach (var x in ElementTypes) { ... }

One of the documents also had a LINQ selection expression where the
returned type was unknown, hence var had to be used; but I can't
reproduce that off the top of my head.
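A sketch of the kind of expression Christoph is recalling, assuming a hypothetical Customers collection with Name, City and Phone members: the select clause builds an anonymous type, which has no name you could write, so var is the only way to declare the result.

```csharp
// The projection creates an anonymous type, so there is no type name
// to write out -- 'var' is required for the result variable.
var results = from c in Customers
              where c.City == "London"
              select new { c.Name, c.Phone };

foreach (var r in results)
    Console.WriteLine("{0}: {1}", r.Name, r.Phone);
```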
 

Frans Bouma [C# MVP]

William said:
This is about the coolest thing since c#. MS just released the bits.
Have not seen any ink on it here, so just spreading the word.
http://objectsharp.com/blogs/barry/archive/2005/09/13/3395.aspx

Linq's concept is great: being able to formulate language
constructs with your own code. I won't comment on their attempt to
write an O/R mapper.

In detail, some things bugged me. One is the 'var' stuff. I really
don't like it, and I share the concerns others have expressed that it
will be abused by novices and we'll end up with JavaScript-esque code.
*shiver*.

Another one is more subtle. If you've paid attention to the query
examples, they formulate them something like:

var foo = from t in ...
          where ...
          select t;

(or something like that; you get the idea).

The first thing you'll notice is that 'select' is at the end and not
at the front of the query expression. So I wondered why, as VB.NET 9
has select at the front. I read that select is at the end of the query
for _intellisense_.

There I draw the line. A general purpose language should NEVER have
its syntax compromised because of intellisense.

Personally I had a bit of a hard time grasping the details of the
extension method syntax, as I find the syntax becoming more and more
obscure. Obscure in the sense that it's not clear at a glance what the
code will do. I have the same feeling with 'yield' in v2.0: it's a
trick which breaks the 'what you read from top to bottom is what you
get' pattern: all of a sudden execution returns to the caller, and
where it goes and when it comes back is hidden under the surface.
IMHO that's not good; a language as general-purpose as C# should be
clear and simple: what's going on is what you see, right there.
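A minimal sketch of the yield behavior being described here: execution leaves the method at each yield return and silently resumes there on the next iteration, which is exactly the control flow that a top-to-bottom read doesn't show.

```csharp
static IEnumerable<int> Numbers()
{
    Console.WriteLine("before 1");
    yield return 1;              // execution returns to the caller here...
    Console.WriteLine("before 2");
    yield return 2;              // ...and resumes here on the next MoveNext
}

// The method body doesn't run at all until the sequence is enumerated;
// each loop iteration re-enters the method where it last left off.
foreach (int n in Numbers())
    Console.WriteLine(n);
```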

But perhaps it's me, dunno :)

FB


--
 

Frans Bouma [C# MVP]

William said:
I see your point. On the other hand, we have always been able to do:
object s = "literal";

So I'm not sure if it's worse or not.

ok very good point! :)

However, every C# developer knows that doing:
object s = "Sometext";
is bad.

However, now it seems that these 'var' constructs are required to make
it all work together properly. With a compiler trick they made it
typesafe, but it reads horribly. At university we once had to write a
compiler for a language which solely derived the types from the values
assigned to variables; it's no picnic, so I understand it must have
been a tough job to add 'var' to the language and compiler in a
typesafe manner, but still... why not declare the type up front and
read the results into instances of that type? What's wrong with a
factory pattern implementation?

FB.

--
 

Frans Bouma [C# MVP]

Bruce said:
I think that you misunderstand what "var" is in C#. It's not (and
Anders said so today) VB's "non-type", "this-can-be-anything"
declaration. All it does is instruct the compiler to infer the type
from the initialization expression.

I think everyone here understood it that way. :)
Similarly, Anders didn't give this example, but I imagine that:

var x;

would be illegal and result in a compiler error. Without an
initialization, there is no way for the compiler to infer a type for
x.

which IMHO is a weird restriction, as 'var x' doesn't mean anything by
itself; it just says 'x' is a 'var' that is presumably initialized
later on. For the compiler it doesn't make a difference, so I can only
see this restriction being there because it would otherwise make code
really unreadable:

var x;

// 20 lines of code, no x usage
x="lalala";

Now, if that would make code less readable, why would this be any more
readable:

var x="lalala";
// 20 lines of code, no x usage
x="foo";

Though that's not the real problem with 'var'. The real problem is
with query results. 'var' is used to declare variables which hold query
results, and of what type are these? You have to look VERY closely at
the query.

This adds obscurity.
Yes, I imagine that "var" will become the usual way to declare local
variables, not because it removes type checking, but just because you
get the same thing with less typing.

That's a slippery slope. Do you also name variables 'r', 'a', etc.
because they're less typing? Clarity is KEY; it's very important that
code is readable, now and three years from now, by anyone who has to
read it. Dropping syntax elements just to have less code to type
doesn't help in the readability department, IMHO.

FB


--
 

Alfredo Novoa


The language is very, very messy and dirty compared to relational
languages like Tutorial D, D4 or even Quel, and it is hard to count
how many language design principles are violated.

One of the most evident flaws is that you cannot define relation
variables or views.

It seems that they were not advised by database experts.

That said, C# 3.0 is infinitely better than C# 2.0 for working with
databases.

References:

http://web.onetel.com/~hughdarwen/TheThirdManifesto/D.grm
http://www.alphora.com/docs/D4LG.html



Regards
Alfredo
 

William Stacey [MVP]

The first thing you'll notice is that 'select' is at the end and not
at the front of the query expression. So I wondered why, as VB.NET 9
has select at the front. I read that select is at the end of the query
for _intellisense_.

If you don't know what t is up front, you can't strongly type the where
and select clauses. Actually, it makes a bit of sense. Anders explains
this pretty well in the Channel 9 video. As for the var thing, I think
this would have been impossible to do without something like it.
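The ordering point can be seen by comparing the query form with the method calls it roughly translates to (a sketch, assuming a hypothetical strongly typed customers collection):

```csharp
// Because 'from c in customers' comes first, the compiler (and the
// editor) already knows c is a Customer by the time it reaches the
// where and select clauses.
var names = from c in customers
            where c.Age > 30
            select c.Name;

// Roughly what the query translates to -- the same front-to-back
// flow of type information through the chain:
var names2 = customers.Where(c => c.Age > 30).Select(c => c.Name);
```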
 

Alfredo Novoa

Personally I had a bit of a hard time grasping the details of the
extension method syntax, as I find the syntax becoming more and more
obscure.

I find the same, especially when I compare it with the crystal-clear
syntax of Tutorial D and D4.


Regards
Alfredo
 

Bruce Wood

...but you have to realize the intent. The intent was not to embed a
database language into C#. The intent was to make it much easier to get
information _out of_ a database and _into_ .NET / C#. I think that it
does that admirably.

For my money, anyone working for me who tries to use this new feature
as a full-scale query language will get a kick in the pants. I have a
database for defining views. These new constructs are just for loading
and sifting through information, not for reinventing SQL inside C#.

You say that a lot of language design principles are violated... what
in particular don't you like?
 

Mattias Sjögren

Frans,
However, every C# developer knows that doing:
object s = "Sometext";
is bad.

That's a pretty blanket statement that I don't agree with. I can think
of at least one situation where code like that is perfectly valid (and
required even), namely when you have to pass a string argument to a
ref object parameter.
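A small sketch of the case Mattias means, with a hypothetical method taking a ref object parameter: a ref argument's declared type must match the parameter type exactly, so a string variable won't do.

```csharp
// Hypothetical method for illustration: it may replace value entirely,
// which is why the parameter is 'ref object' rather than 'ref string'.
static void Process(ref object value)
{
    value = 42; // could assign anything -- legal only because value is object
}

// 'string s = "Sometext"; Process(ref s);' would not compile:
// a ref argument must be declared as exactly 'object' here.
object s = "Sometext";
Process(ref s);
```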


Mattias
 

Mattias Sjögren

Christoph,
Of course nobody will ever write "var x = 5" instead of
"int x = 5", that would be silly!

Yes, it's silly, but that won't stop people from doing it. Every
feature that can be abused will be (I'm sure that's some kind of law
of nature).

The real use of var is for types with lengthy names that are
instantiated with the new keyword rather than literal initializers,

I wouldn't use 'var' for that either. I prefer importing the namespace
with 'using' and letting the editor complete the long type name for
me.


Mattias
 

Jon Skeet [C# MVP]

Christoph Nahr said:
Looks like everybody's getting hung up on some silly throwaway
examples. Of course nobody will ever write "var x = 5" instead of
"int x = 5", that would be silly!

Well, Bruce Wood certainly seems to think it's not silly:

<quote>
Yes, I imagine that "var" _will_ become the usual way to declare local
variables, not because it removes type checking, but just because you
get the same thing with less typing.
</quote>
The real use of var is for types with lengthy names that are
instantiated with the new keyword rather than literal initializers, or
else for expression where the type is either obvious or unknown.

Some *real* examples for how useful var can be:

// much more readable than repeating the long identifier!
var x = new MyCompany.MyLongNamespace.MyLongTypeName();

Funnily enough,

MyLongTypeName x = new MyLongTypeName();

is shorter though - just have a using directive for the long namespace,
and you're fine.
// obviously x is going to be an ElementType
foreach (var x in ElementTypes) { ... }

One of the documents also had a LINQ selection expression where the
returned type was unknown, hence var had to be used; but I can't
reproduce that off the top of my head.

Yes, anonymous types are a reasonable use for "var" - but I for one
*don't* want them to become the usual way to declare (most) local
variables.
 

Daniel Jin

Even if you go beyond the simple var x = 5 example:

What if you write var x = GetFunkyReturn(); ? (I assume this should be
legal, since the compiler should be able to infer what x is.)

But do you know what x is now? int? List<int>? Or something else weird?
We are not discounting the usefulness of var in certain situations, but
it does have its shortcomings as well in producing hard-to-read code.
 

William Stacey [MVP]

Well said. Also, I think most (if not all) of that article does not really
apply.
 

Bruce Wood

I think I should clarify my position, before I'm buried under a
mountain of flame. :)

I, personally, would prefer to use explicit type specifiers. I'm
old-school, and you're right: the code is immediately more readable.

However, I have no illusion that "var" will not quickly become the de
facto way to declare local variables. It's too tempting. You have to be
particularly pedantic (and I can be, when I want to) in order to
continue typing even the first few characters of MyLongTypeNameType to
get Intellisense to give you the right choice rather than simply typing
"var".

Yes, the resulting code will become less readable, but I don't think it
will become horribly less readable, for several reasons.

First, there is the IDE to help me. I can mouse over anything and it
will show me its type. I use that feature occasionally now, as
sometimes I'm in the middle of some code, far (20 lines or so) from the
declaration of some variable... how do I know what type it is (if I
even care, see below)? I don't go looking for the declaration; I just
mouse over the name in question.

Second, I find that I often don't even care what type it is. Sure, if
it's a primitive numeric type then it matters in my calculations what
type it is, but then I, like the compiler, can immediately tell that

var abc = 5; // is an integer, and
var def = 1.0m; // is a decimal

Yes, it's prettier and easier to read to have the "int" and "decimal"
keywords there, but their absence isn't going to make the code
unreadable for me. I think I'll be able to adapt. :)

For reference types, why do I care what type it is? I can't remember a
bug in which I used the wrong reference type in the wrong place. If I
understand the problem domain, type is almost always obvious from
context, because I'm no longer concerned about semantic correctness at
that level: the compiler is more and more taking over that worry.
Again, I can't remember the last time I was reading code searching for
the type of something. I'm usually reading code trying to determine
where the implementation is flawed, or the design is bad.

I'm not saying that having that type name there is useless. It _does_
aid readability. I'm just not sure that it aids it so much that taking
it away would make my job of reading code all that much more difficult.
As I said, I think I can adapt. (It looks as though I'm going to have
to, anyway... I doubt I can convince my colleagues to forgo the use of
"var" :)

This leaves printed code. Here, I think that there could be a problem.
Without Intellisense to help out, Daniel is right: on a printed page
figuring out what type is returned by a method could be very difficult.
However, I see no reason why Visual Studio couldn't allow you to "turn
on" some sort of view that showed you the types along side "var"
declarations. I can see some help being put into the tool, rather than
stopping people from using shorthand, which they will want to do in
droves.

I've seen this steady progression over the years: the IDEs and tools
take over more and more of the grunt work of programming, becoming, in
the process, indispensable. I don't really mind it. I see it as a
natural evolution of the programming craft, just so long as the
languages are unambiguous and the tool helps you clarify what's going
on.

Will I continue to type

double x = 0.5;

rather than

var x = 0.5;

Probably. I'm that sort of person. :) I won't fault people who go
using "var", though.
 

Bruce Wood

I just finished chatting with a fellow on the Visual C# team. He told
me that they're already considering offering transformations like
"change all vars to explicit declarations". I said that rather than
that, I would prefer the option to _view_ or _print_ all vars as
explicit declarations without changing the code. (You know: source code
versioning; flame wars with colleagues who insist on changing
everything _to_ var... that sort of thing. :)

He said he'd pass on the suggestion.

I think that they could do this simply with some colour-coding, or
maybe some subtle font highlighting like the #region stuff: show the
real type instead of "var", but highlight it slightly to let you know
that you're not looking at the real code, and do the same when
printing.

BTW, it really bugs me that collapsed sections print as plain text in
the middle of my source code. I find that jarring. I hope they fix it
to print a collapsed section with similar highlighting to what's on the
screen. Just a pet peeve. :)
 

Bruce Wood

Talked to another guy who is actually on the team doing Visual C# IDE.
He says that they've already thought of this, and are planning to offer
ways to "see" the code as though all vars were explicit declarations,
if you want.

He also said that there were two drivers behind "var": 1) anonymous
types, and 2) query results, which can be horribly deeply nested
generics that take up several lines just for the type name. I hadn't
thought of the latter: there's a case in which showing the type name
could make the code _less_ readable.
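A sketch of the kind of query result type being described, assuming a hypothetical Order type with a CustomerName property: even a single level of grouping makes the explicit type a mouthful, and nesting further groupings compounds it.

```csharp
// Explicit declaration: the full type of a simple grouping query...
IEnumerable<IGrouping<string, Order>> byCustomer =
    from o in orders
    group o by o.CustomerName;

// ...versus the inferred equivalent: identical type, far less noise.
// Add another level of grouping and the explicit form gets unwieldy fast.
var byCustomer2 = from o in orders
                  group o by o.CustomerName;
```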

Of course, I would consider using queries in that way an abuse of the
language. That's one thing I hated in C++: the temptation that
programmers feel to create "über template types" with umpteen zillion
levels of nesting. Generics are elegant and should be kept simple,
IMHO.

So, anyway, when the "show vars as explicit declarations" feature comes
out in Visual Studio... whatever... I can't claim credit. :(
 

Christoph Nahr

Well, Bruce Wood certainly seems to think it's not silly:

<quote>
Yes, I imagine that "var" _will_ become the usual way to declare local
variables, not because it removes type checking, but just because you
get the same thing with less typing.
</quote>

But this quote is hardly true for int (also three letters), or for
string or double (six letters instead of three, and all lowercase...
please!).
Funnily enough,

MyLongTypeName x = new MyLongTypeName();

is shorter though - just have a using directive for the long namespace,
and you're fine.

Yes, but sometimes you may wish to avoid importing big namespaces
wholesale just for a single type. Besides, the type name might use
generic type parameters, in which case you'd need an individual using
alias just for that type.
Yes, anonymous types are a reasonable use for "var" - but I for one
*don't* want them to become the usual way to declare (most) local
variables.

I'll defy you all and use them with wild abandon on any types longer
than ten characters! :p
 
