Can't export vector<> from DLL

Bob Altman

Hi all,

I have a class that is exported from a DLL. This class includes a private
std::vector<int>. I followed the instructions in this KB article
<http://support.microsoft.com/kb/q168958/> to create a sample app. When I
compile the sample app I get a C4251 warning
('std::_Vector_val<_Ty,_Alloc>::_Alval' : class 'std::allocator<_Ty>' needs
to have dll-interface to be used by clients of class
'std::_Vector_val<_Ty,_Alloc>') in the STL "vector" header.

What magic incantation do I need to perform to get this to compile
correctly?

To repro this problem:

1. Create a new Win32 Console App project named DLLTest. I'm using VS 2005
SP1, but I imagine you'll get the same behavior in VS 2008.

2. Click on Application Settings, select DLL and Export Symbols, and click
on Finish.

3. Replace DLLTest.h with this code:

// --- DLLTest.h---
#ifdef DLLTEST_EXPORTS
#define DLLTEST_API __declspec(dllexport)
#define DLLTEST_EXTERN
#else
#define DLLTEST_API __declspec(dllimport)
#define DLLTEST_EXTERN extern
#endif

#include <vector>

// Instantiate vector<int>
DLLTEST_EXTERN template class DLLTEST_API std::vector<int>;

// This class is exported from the DLLTest.dll
class DLLTEST_API CDLLTest {
private:
    std::vector<int> m_test;
};

// --- End of code ---

TIA - Bob
 
Jialiang Ge [MSFT]

Hello Bob

It is safe to just disable the warning in the case of vector<int> by using
#pragma warning(disable:4251)
- or -
-wd4251 on the C++ command line.
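
If you prefer to keep the suppression local rather than project-wide, one
possible sketch around your class:

#pragma warning(push)
#pragma warning(disable:4251)   // vector<int> member needs no dll-interface here

// This class is exported from the DLLTest.dll
class DLLTEST_API CDLLTest {
private:
    std::vector<int> m_test;
};

#pragma warning(pop)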

Do you see any side effects after doing that? I changed your m_test member to
be "public" for simplicity and consumed it from a C++ console program like
this:
CDLLTest a;
a.m_test.push_back(3);
It works well on my side.

One exception I have heard about is that the *transitive closure* of all
instantiations needs to be exported, in "depth-first" fashion. In this
particular case, these lines would need to be added:

// Needed because vector<int> references this
template class DLLTEST_API std::allocator<int>;
template class DLLTEST_API std::vector<int>;

This is usually not needed, so simply disabling the warning is often the
simpler option.

Also note that the only STL container that can currently be exported is
vector. The other containers (that is, map, set, queue, and list) all
contain nested classes and cannot be exported.

Regards,
Jialiang Ge ([email protected], remove 'online.')
Microsoft Online Community Support

=================================================
Delighting our customers is our #1 priority. We welcome your comments and
suggestions about how we can improve the support we provide to you. Please
feel free to let my manager know what you think of the level of service
provided. You can send feedback directly to my manager at:
(e-mail address removed).

This posting is provided "AS IS" with no warranties, and confers no rights.
=================================================
 
Ben Voigt [C++ MVP]

Bob Altman said:
Hi all,

I have a class that is exported from a DLL. This class includes a private
std::vector<int>. I followed the instructions in this KB article
<http://support.microsoft.com/kb/q168958/> to create a sample app. When I
compile the sample app I get a C4251 warning
('std::_Vector_val<_Ty,_Alloc>::_Alval' : class 'std::allocator<_Ty>'
needs to have dll-interface to be used by clients of class
'std::_Vector_val<_Ty,_Alloc>') in the STL "vector" header.

What magic incantation do I need to perform to get this to compile
correctly?

Use PIMPL or COM-style pure virtual classes so that these internal data
structures aren't included in the header used by the client.

dllexport of classes in general, and STL in particular, is fraught with
peril and highly discouraged.
 
Bob Altman

One exception I have heard about is that the *transitive closure* of all
instantiations needs to be exported, in "depth-first" fashion. In this
particular case, these lines would need to be added:

// Needed because vector<int> references this
template class DLLTEST_API std::allocator<int>;
template class DLLTEST_API std::vector<int>;


Ahhh... the elusive "magic incantation". That gets rid of the warning.
Thanks Jialiang.

This whole thing came about because of the following real-world problem: I
recently added a new virtual function to a class that is exported from a
DLL. Clients may override its implementation, but if they don't then I
provide a default implementation, which is nothing more than "void MyFunc()
{}". To my great surprise, the new DLL isn't backward compatible. If I run
an app that was built with the old version of the library it fails in the
new version of the library with an access violation in some RTTI*something*
routine, which was called by compiler-generated code somewhere in the
vicinity of a call to my new virtual function. I recalled that this library
had a bunch of private vectors with warning 4251 suppressed. Until now that
didn't seem to be a problem, but I strongly suspect that some
compiler-generated code is failing while trying to get run-time info about
these private variables. So, to fix the access violation problem, the first
thing I want to do is to get rid of the 4251 warnings. (I have seen another
case where I resolved a strange access violation by correctly exporting the
instantiation of a template that was only used for a variable that was
marked "private" in the interface.) I removed the #pragma lines that
disabled the warning and added the "template class...std::vector" code, but
I was still getting the warning that it wants me to export the allocator (in
addition to exporting the vector). I had tried exporting the allocator, but
it didn't occur to me to put that line *before* the line that exports the
vector. In other words, I had:

// Needed because vector<int> references this
template class DLLTEST_API std::vector<int>;
template class DLLTEST_API std::allocator<int>;

This code still produces the C4251 warning. But simply putting the "allocator"
line before the "vector" line makes the warning go away, which makes sense:
vector references allocator, so the allocator has to be exported before the
vector.
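
So, for the record, the relevant section of the repro header ends up looking
like this (a sketch, reconstructed from the pieces above):

#include <vector>

// Export the allocator first; vector<int> references it.
template class DLLTEST_API std::allocator<int>;
template class DLLTEST_API std::vector<int>;

// This class is exported from the DLLTest.dll
class DLLTEST_API CDLLTest {
private:
    std::vector<int> m_test;
};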

I'm writing this response from home. I have no idea yet whether eliminating
the warning will, in fact, fix the access violation. I'll find out in a
couple of hours when I get to work and try it...

Bob
 
Bob Altman

Brian Muth said:
Ben, I second this. Bob, this is a Really Bad Idea.

Brian

Oh, I heartily agree with both of you, but I don't know how to avoid it.
What are the alternatives ("PIMPL" or "COM-style pure virtual classes")?

Bob
 
Doug Harrison [MVP]

This whole thing came about because of the following real-world problem: I
recently added a new virtual function to a class that is exported from a
DLL. Clients may override its implementation, but if they don't then I
provide a default implementation, which is nothing more than "void MyFunc()
{}". To my great surprise, the new DLL isn't backward compatible. If I run
an app that was built with the old version of the library it fails in the
new version of the library with an access violation in some RTTI*something*
routine, which was called by compiler-generated code somewhere in the
vicinity of a call to my new virtual function.

Given that you changed the vtbl layout by adding a virtual function, you
should expect problems like that if you don't recompile all the clients
with the same compiler version and settings. What you're doing is very much
like linking to a static library WRT compilation dependencies, and that's
how you need to think of it. The fact that an "RTTI*something*" function
was called may have had something to do with your newly incompatible vtbls,
as RTTI supporting data can be pointed to through the vtbl.
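
To make that concrete (hypothetical names, and the vtbl layout is an MSVC
implementation detail, so treat this purely as a sketch):

class MyClassV1 {                   // layout the old client was compiled against
public:
    virtual void DoSomething();     // vftable[0]
};

class MyClassV2 {                   // layout the new DLL was compiled against
public:
    virtual void DoSomething();     // vftable[0]
    virtual void MyNewFunc();       // vftable[1] -- a slot the old client's
                                    // copy of the vtbl simply doesn't have
};
// MSVC also stores a pointer to the 'RTTI Complete Object Locator' right
// next to each vftable, so an indirect call through a stale or out-of-range
// slot can easily land in RTTI data -- which matches the crash you describe.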
 
Doug Harrison [MVP]

Oh, I heartily agree with both of you

Template static data notwithstanding, the method is fine as long as you
realize that your DLL code partitioning does not create compilation
partitions. That is, you're desiring to share full-fledged C++ classes
across a process composed of dynamically linked modules, and you want the
result to behave like a statically linked C++ program. You can come very
close to accomplishing that, but you have to treat it as static linking WRT
compilation dependencies. I went over that in another reply to you so won't
belabor the point here.
but I don't know how to avoid it.
What are the alternatives ("PIMPL" or "COM-style pure virtual classes")?

Ways to avoid creating compilation dependencies by hiding class internals
from users of the class. They are a pain in the neck here, because you
can't do natural things like passing std::strings back and forth between
modules. If you want to reinvent COM, I can only say, "Knock yourself out,"
because you will. :) That said, in your other message, you explained that
you modified the base class vtbl and expected derived classes in other
modules to be unaffected. Well, COM won't save you from that, except by
making you regard the base class (or "interface" in COM-speak) as
immutable. IOW, you wouldn't have experienced the problem because you
simply wouldn't have done what you did. Instead, you would have created a
brand new interface, say, MyClass2, which would have had no effect on
existing derived classes, because there wouldn't be any.
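
Stripped of the IUnknown/HRESULT plumbing, the pattern looks something like
this (hypothetical interface names):

struct IMyThing                     // published once, then frozen forever
{
    virtual void DoSomething() = 0;
};

struct IMyThing2 : IMyThing         // need more? publish a new interface
{
    virtual void DoSomethingElse() = 0;
};
// Old clients keep programming against IMyThing and never notice IMyThing2;
// nothing in the layout they were compiled against has changed.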
 
Bob Altman

Doug Harrison said:
Given that you changed the vtbl layout by adding a virtual function, you
should expect problems like that if you don't recompile all the clients
with the same compiler version and settings. What you're doing is very much
like linking to a static library WRT compilation dependencies, and that's
how you need to think of it. The fact that an "RTTI*something*" function
was called may have had something to do with your newly incompatible vtbls,
as RTTI supporting data can be pointed to through the vtbl.

OK, let's explore that a little more. I eliminated the C4251 warnings by
creating exports for the template classes used by variables marked as
"private" in the interface, but that was apparently a red herring because I
still get an access violation in a routine identified by the debugger as
MyTemplateClass<long>::'RTTI Complete Object Locator' (where MyTemplateClass
is a class exported by my library).

The client is super simple:

int _tmain(...) {
    // Create an instance of the class on the stack
    MyClass xxx;

    // Call into it
    xxx.DoSomething();

    // All done
    return 0;
}

In this code, MyClass is the class, exported from a DLL, that I've modified
by adding a new virtual function. When I run this in the debugger (that is,
the previously-built client plus the new DLL) I can step into the
DoSomething routine and step right up to the point where it calls into the
new virtual function. If I look at the assembly code I see instructions
fiddling with the stack and doing what appears to my untrained eye to be
setting up for a vanilla subroutine call. But the call takes me to an
unexpected location (in this case the 'RTTI Complete Object Locator'
function).

Now, here's where things get ugly for me: Presumably, the instructions I
see prior to the call instruction are fetching the target routine address
from the vtable. That code is in the DLL, but I guess the vtable is in the
client (since the client can override the virtual functions to provide
alternate implementations of them).

So, am I stuck? Is there any way to add a virtual function to MyClass
without having to recompile all of the existing clients?

And here's the larger question: MyClass is a "base class" with virtual
functions that the clients are expected to override. How can I provide this
base class to my clients in a way that makes it easy (or at least
*possible*) to extend it in the future in a backward-compatible way?

Bob
 
Doug Harrison [MVP]

OK, let's explore that a little more. I eliminated the C4251 warnings by
creating exports for the template classes used by variables marked as
"private" in the interface, but that was apparently a red herring because I
still get an access violation in a routine identified by the debugger as
MyTemplateClass<long>::'RTTI Complete Object Locator' (where MyTemplateClass
is a class exported by my library).

The client is super simple:

int _tmain(...) {
    // Create an instance of the class on the stack
    MyClass xxx;

    // Call into it
    xxx.DoSomething();

    // All done
    return 0;
}

In this code, MyClass is the class, exported from a DLL, that I've modified
by adding a new virtual function. When I run this in the debugger (that is,
the previously-built client plus the new DLL) I can step into the
DoSomething routine and step right up to the point where it calls into the
new virtual function. If I look at the assembly code I see instructions
fiddling with the stack and doing what appears to my untrained eye to be
setting up for a vanilla subroutine call. But the call takes me to an
unexpected location (in this case the 'RTTI Complete Object Locator'
function).

Now, here's where things get ugly for me: Presumably, the instructions I
see prior to the call instruction are fetching the target routine address
from the vtable. That code is in the DLL, but I guess the vtable is in the
client (since the client can override the virtual functions to provide
alternate implementations of them).

So, am I stuck?
Yep.

Is there any way to add a virtual function to MyClass
without having to recompile all of the existing clients?
Nope.

And here's the larger question: MyClass is a "base class" with virtual
functions that the clients are expected to override. How can I provide this
base class to my clients in a way that makes it easy (or at least
*possible*) to extend it in the future in a backward-compatible way?

Not sure what you mean by "backward-compatible". If you want to be able to
replace your DLL with a new version, you are very limited in what you can
do. As long as you aren't linking by ordinals, it is possible to add
non-virtual and static member functions. You can mess around with static
member data to the extent that the DLL clients don't use it. If you
add/remove non-static member data, change the type or order of non-static
member data declarations, add/remove virtual functions, change virtual
function signatures, reorder existing virtual functions, and so forth, all
clients will have to be recompiled. Also, they will have to be recompiled
if you change compiler versions or settings in ways that affect them. If
that's a significant problem for you, you're in the scenario for which
exporting whole classes is inappropriate, and you really should consider
something like COM. Again, if you want to export whole classes, you must
think of it like static linking, with the additional restriction that all
modules must link to the same CRT DLL.
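
As a rough illustration (a hypothetical exported class, assuming clients link
by name rather than by ordinal):

class DLLTEST_API MyClass
{
public:
    // Usually safe to add in a new DLL version without rebuilding clients:
    void NewHelper();                   // new non-virtual member function
    static void NewStaticHelper();      // new static member function

    // Any of the following breaks the layout old clients were built against,
    // so every client has to be recompiled:
    //   virtual void NewVirtual();     // adds/reorders vtbl slots
    //   int m_newMember;               // changes object size and offsets
    //   (likewise: changing member types, order, or virtual signatures)

private:
    int m_existing;
};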
 
Bob Altman

So, am I stuck?
Yep.

Is there any way to add a virtual function to MyClass
without having to recompile all of the existing clients?
Nope.

But I assume that I should have no problem if I don't change the MyClass
interface and instead create a MyClass2 class that inherits from MyClass and
extends its interface? Existing clients would happily keep on consuming
MyClass, and new clients can use MyClass2 and its new, improved interface.
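
In other words, something like this (just a sketch; the new function name is
made up):

// Shipped in version 1 of the DLL and left alone from now on.
class DLLTEST_API MyClass
{
public:
    virtual void DoSomething();
};

// Added in version 2. Existing clients never see it; new clients use
// MyClass2 to get the extended interface.
class DLLTEST_API MyClass2 : public MyClass
{
public:
    virtual void DoSomethingNew();
};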

Bob
 
Doug Harrison [MVP]

But I assume that I should have no problem if I don't change the MyClass
interface and instead create a MyClass2 class that inherits from MyClass and
extends its interface? Existing clients would happily keep on consuming
MyClass, and new clients can use MyClass2 and its new, improved interface.

Of course. In COM, the only shared class data is in the form of vtbls, and
COM specifically forbids modifying a published interface, i.e. changing its
vtbl, because to do so is to break binary compatibility with clients, which
cannot be recompiled. So the only option is to create a new interface,
which is why we have IContextMenu, IContextMenu2, IContextMenu3, and so
forth. That said, the other restrictions I talked about such as requiring
everyone to link to the same CRT DLL still apply as long as you're sharing
whole classes. COM does not have that restriction; indeed, the opposite is
sort of true. No COM DLL should link to a CRT DLL, because multiple modules
in a process that link to the same CRT DLL share all their CRT state, and
it's pretty easy to step outside the nominal black box.
 
Ben Voigt [C++ MVP]

Bob said:
Oh, I heartily agree with both of you, but I don't know how to avoid
it. What are the alternatives ("PIMPL" or "COM-style pure virtual
classes")?

http://en.wikipedia.org/wiki/Pimpl
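
In a nutshell, PIMPL looks something like this (a sketch only; copy control
and error handling omitted):

// MyClass.h -- the only thing the client ever includes; no STL in sight.
class DLLTEST_API MyClass
{
public:
    MyClass();
    ~MyClass();
    void DoSomething();
private:
    struct Impl;            // defined only inside the DLL
    Impl* m_impl;
};

// MyClass.cpp -- inside the DLL, where <vector> can be used freely.
#include <vector>
struct MyClass::Impl
{
    std::vector<int> data;
};
MyClass::MyClass() : m_impl(new Impl) {}
MyClass::~MyClass() { delete m_impl; }
void MyClass::DoSomething() { m_impl->data.push_back(3); }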

Pure virtual classes are similar, but instead of putting the implementation
inside a helper class, it goes inside a derived class. This allows you to
use more natural C++ style for the implementation because the private data
are members of the class implementing the public member functions. The
important thing is that the implementation class type is never seen by the
client, only the base class which consists purely of pure virtual function
declarations. Because the client can't see the most-derived type, it can't
create objects using the new operator. Instead, you provide a static or
global function to create an object and return it as a
pointer-to-public-base-class.
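
And a bare-bones sketch of the pure-virtual-interface flavor (with a Destroy()
method instead of delete, so the object is always freed on the DLL's side of
the boundary):

// Public header: nothing but pure virtual functions, so no C4251 and no
// layout or CRT details leak to the client.
class IMyClass
{
public:
    virtual void DoSomething() = 0;
    virtual void Destroy() = 0;     // client calls this instead of delete
protected:
    ~IMyClass() {}                  // not deletable through the interface
};

// The one exported entry point the client links against.
extern "C" DLLTEST_API IMyClass* CreateMyClass();

// Inside the DLL: the implementation type the client never sees.
#include <vector>
class MyClassImpl : public IMyClass
{
public:
    virtual void DoSomething() { m_data.push_back(3); }
    virtual void Destroy() { delete this; }
private:
    std::vector<int> m_data;
};

IMyClass* CreateMyClass() { return new MyClassImpl; }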

Both pay an abstraction penalty in performance due to the additional
indirection: for PIMPL the cost is incurred when accessing private members;
for pure virtual classes it is incurred when the client calls member
functions. Either way, the cost is directly related to the chattiness of the
interface. Simple getter/setter methods tend to be the most expensive pattern
because of the sheer number of calls, none of which can be inlined. But
minimizing the number of calls across a library boundary is usually a good
idea anyway.
 
Ben Voigt [C++ MVP]

Of course. In COM, the only shared class data is in the form of
vtbls, and COM specifically forbids modifying a published interface,
i.e. changing its vtbl, because to do so is to break binary
compatibility with clients, which cannot be recompiled. So the only
option is to create a new interface, which is why we have
IContextMenu, IContextMenu2, IContextMenu3, and so forth. That said,
the other restrictions I talked about such as requiring everyone to
link to the same CRT DLL still apply as long as you're sharing whole
classes. COM does not have that restriction; indeed, the opposite is
sort of true. No COM DLL should link to a CRT DLL, because multiple
modules in a process that link to the same CRT DLL share all their
CRT state, and it's pretty easy to step outside the nominal black
box.

COM DLLs can certainly use the CRT and Standard C++ library internally. But
only pointers to POD and pure interfaces should be part of the public API.
 
Ben Voigt [C++ MVP]

Doug said:
Do you think I said they cannot?

You said "No COM DLL should link to a CRT DLL". Reconsidering that, I guess
you were saying to statically link the runtime, but I still disagree. If
you follow the rules about respecting memory ownership and treat CRT types
as per-module, you won't have any trouble with either statically linked or
DLL CRT.
 
Doug Harrison [MVP]

You said "No COM DLL should link to a CRT DLL". Reconsidering that, I guess
you were saying to statically link the runtime, but I still disagree. If
you follow the rules about respecting memory ownership and treat CRT types
as per-module, you won't have any trouble with either statically linked or
DLL CRT.

If you use the DLL form, you also must avoid directly or indirectly using
things like setlocale, set_new_handler, errno, and so forth. If you don't,
and your CRT coincides with some other module's CRT, you will change the
other module's CRT state. For something that is supposed to be a black box
that has no effect on anything around it, these are some pretty draconian
(and apparently subtle) restrictions.
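
A contrived illustration of the kind of coupling I mean, assuming two modules
that happen to link against the same CRT DLL:

// Module A, e.g. a COM DLL, somewhere in its initialization:
#include <clocale>
void InitA()
{
    // This changes global CRT state for *every* module in the process
    // that shares this CRT DLL, not just for module A.
    std::setlocale(LC_NUMERIC, "German");
}

// Module B, built by someone else but linked against the same CRT DLL:
#include <cstdio>
void PrintTotal(double total)
{
    // After InitA() has run, the decimal point suddenly comes out as ','
    // rather than '.', even though module B never touched the locale.
    std::printf("Total: %f\n", total);
}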
 
