
Prevent writing to > 1 stream in a transaction #287

Closed · damianh opened this issue Nov 8, 2013 · 54 comments

@damianh (Contributor) commented Nov 8, 2013

The original design purpose of a stream is to represent an Aggregate Root, and that is the consistency boundary. Writing to more than one stream in a transaction breaks this and is not an officially supported scenario. Besides, not all persistence engines support .NET transactions, let alone 'distributed' transactions (DTC).

The purpose of the transaction support was really to allow a user to interact with another piece of infrastructure transactionally, i.e. read from a queue or post to an NServiceBus endpoint.

When a user tries to update more than one stream in a single transaction, we should throw.

Some people like to run with scissors, so I am open to allowing an override, explicitly set during wireup, that prevents the exception from being thrown. Anyone doing this does so at their own risk and without support.
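
For concreteness, the check being proposed could look roughly like the following sketch (all names here are hypothetical, not actual NEventStore internals; F# to match the code posted later in this thread):

    open System.Collections.Concurrent
    open System.Transactions

    // Hypothetical guard: remember the first stream id written inside each
    // System.Transactions transaction and throw on any second, distinct stream,
    // unless the run-with-scissors override was set during wireup.
    // (Cleanup of completed transactions is omitted for brevity.)
    type MultiStreamGuard(allowMultipleStreamsPerTransaction : bool) =
        let firstStreamPerTx = ConcurrentDictionary<string, string>()
        member _.OnCommit(streamId : string) =
            match Transaction.Current with
            | null -> ()  // no ambient transaction, nothing to police
            | tx ->
                let txId = tx.TransactionInformation.LocalIdentifier
                let first = firstStreamPerTx.GetOrAdd(txId, streamId)
                if first <> streamId && not allowMultipleStreamsPerTransaction then
                    invalidOp "Writing to more than one stream in a single transaction is not supported."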

@ghost ghost assigned damianh Nov 8, 2013
@irium commented Aug 25, 2014

Yes, I need such an override. Writing to more than one stream is functionality I need, and I am happy to opt into it explicitly.

@bartelink

@irium Perhaps you could share a little about your specific case [and the degree to which it requires transaction support to achieve a specific goal]?

@gregoryyoung

I'd be interested in the use case; normally an altered model removes such needs.


@damianh (Contributor, Author) commented Aug 26, 2014

As there are more implementations coming into the fold that don't support System.Transactions, I'm strongly considering removing transaction support altogether. It appears to be encouraging the wrong sort of usage too.

@larsw commented Aug 26, 2014

@irium commented Aug 26, 2014

Yes, I agree that my use case is very specific. And I don't want to use 2PC; on the contrary, I want to use a single transaction for everything. Let me describe it. My system is far from high-load, so I decided to stay away from all that "eventual consistency" crap. My event store, aggregates and read model all reside in a single database. Also, my app is plain ASP.NET MVC hosted at a provider, so I cannot use "workers" for handling queues, async buses etc. I decided that all command and event processing should be synchronous, and ALL changes should go in one transaction. The transaction is started by a web api handler, so one api request means one transaction. By "all changes" I mean: committing new events to the event store, updating aggregates and simultaneously updating the read model, so the read model is always consistent.

Such a transaction could easily touch many aggregates, because the domain is very complex and contains many "many-to-many" relations. Yes, I've read Vaughn Vernon's articles, so I've done a huge amount of work to convert them to "one-to-many" ones. But many aggregates are still changed by a single command, and I wish to commit them simultaneously.

As for "an altered model removes such needs": I also want to store the originating commands for auditing purposes, so I decided to have a separate "committed commands" stream. Obviously, they should be persisted together with the events caused by the command. So in any case there will be at least two streams written together.

All in all, I understand that "there are more implementations coming into the fold that don't support System.Transactions". And it's sad that by chasing all the "modern" things (clouds, nosql, etc.) the library is going to drop some of its functionality. I chose NEventStore in the first place because it's a general-purpose event storing library that isn't limiting me to specific use cases. Let me point out that events can be stored not only for persisting aggregates, but for many other reasons.

So I'd be glad if transaction support could remain in NEventStore, perhaps via some sort of extension or other customization mechanism, but at least as an extensibility point, so transaction-capable persistence engines could leverage it.

@gregoryyoung

What happens when you have more than one read model?

Also, saying 'eventual consistency crap' sets off my muppet detector.

I mean: committing new events to event store, updated aggregates and simultaneously update read model, so read model is always consistent.

No such thing as a consistent read model unless you use pessimistic locking.

Frankly, I would spend some time reading up on things, as you seem very lost and probably should not be working on a paid project at the current level of understanding.


@irium commented Aug 27, 2014

What is "more than one read model" ? Each bounded context has it's own read model which includes of course many views (tables in my case).

Also saying 'eventual consistency crap' sets off my muppet detector.

By 'eventual consistency crap' I mean that eventual consistency has it's own goal and it's own corresponding cost. I don't need the goal - scalability and therefore don't want to pay the cost - eventual consistency. "Crap" here is not that something bad, but means that I don't want to use something because it's "cool" or "modern" thingy.

Each tool/framework/methodology has it's own purpose with pros and cons, so should be selected carefully depending of my concrete situation.

No such thing as a consistent read model unless you use pessimistic locking.

Seems you are very biased toward CQRS+ES. How about CQRS without ES where read model=write model too? In such case read model is always consistent with write one.

Further, I'm not sure that I need in ES as aggregate rehydration source. I'm still in making decision - maybe I'll stop at using traditional ORM-mapped to RDBMS write model (aggregate state). So event store will be used mostly for auditing purpose and as 'single source of truth' for rebuilding entire model when it needed. This will be also useful for reproducing/debugging critical issues.

Frankly I would spend some time reading up on things as you seem very lost
and probably should not be working on a paid project the current level of
understanding.

It's not very polite from you to say such things. How do you know what and how many I've read and whether or not I'm lost? If I would be lost, I could ask for advice (on stackoverflow for ex.) But I precisely know what I want and what I need in my project and all that ddd/cqrs/es technology.

But this is all offtopic. Anyway I could just fork and go on my own.
Thanks.

@gregoryyoung

If you don't know what 'more than one read model' is, you have not come very far. Most systems end up with multiple read models. There is also no concept of a read model per context (quite often it's one read model for multiple contexts/services).

Do you know the arguments against storing events and domain state as you describe? Literally hundreds of projects have failed due to this.

As for read models: yes, your reads are stale unless you use pessimistic locking. If you read data and I change it, is your read still valid? If it takes just 10ms over the wire from your SQL box, then your data already has a 10ms SLA.


@irium commented Aug 27, 2014

If you take 10ms just over the wire from your SQL box then your data already had a 10ms sla.

This will always be true, regardless of the storage or whether CQRS is used. What I'm talking about is transactional consistency. Remember the "read committed" isolation level? (If I read data and you change it during or after my transaction completes, then within the scope of my transaction your changes simply don't exist; I'll see them on the next read.) I.e. the user sees the read model in its state as of when his transaction executes.

The user expects to see the immediate results of his transaction. That is what I'm talking about, and that is what eventual consistency breaks. I'm not saying it's bad; I'm saying I don't need it. More importantly, my customer is far from any technical competence, and it will be very difficult to explain to him why he can't see the immediate results of his action.

Do you know the arguments against storing events and domain state as you describe?

As far as I know, CQRS doesn't necessarily imply using ES, does it? Almost every article about CQRS suggests starting by just splitting commands from queries, yes? That's what I already did.
So I decided to go slightly further toward ES, mostly because of auditing needs. ES seems to me (in this project) the easiest way to get full auditing together with possibilities for future improvements. I have already described my needs; they don't include classical "aggregate root event sourcing". What I really need is a full log of all user actions/events, to be able to import it into another system. Imagine a sort of "disconnected" scenario (in fact it is one): the customer needs a "portable" version of the system with a limited set of data in it. This system is then installed on a notebook and used somewhere offline. After that he connects to the main system remotely and imports all the changes back.
Those are my "specific" needs. So I don't see "arguments against storing events and domain state as you describe" here.

Again, this is offtopic. The original issue was about transactions. If the NEventStore author drops support for them, I can just fork and go my own way.

@damianh (Contributor, Author) commented Aug 27, 2014

In event sourcing, the stream is the consistency boundary. The original purpose of the transaction support was to be able to do something like read from a queue and append to a stream. It's a pity that the implementation didn't enforce that scenario somehow. Even that premise is weak, because such scenarios are better handled through idempotency... Clearly the transactional support has allowed (maybe even encouraged) people down the wrong path, reinforcing my desire to remove it altogether.

Irium, you can of course stay with v5 and fork if needed.


@damianh (Contributor, Author) commented Aug 27, 2014

Regarding eventual consistency...

First of all, it's simple to give users 'immediate' results in an eventually consistent system; there are plenty of patterns for that. If you use Gmail, Amazon, eBay or any multitude of web applications out there, you are using such systems. The 'eventually' part is usually measured in milliseconds. Just because a user says 'I want to see the thing I just added in the list of things' does not mean one big transaction.

@bartelink

@irium May I suggest you read from 01:00 AM this morning in https://jabbr.net/#/rooms/DDD-CQRS-ES; it'll definitely help you understand some of the concerns here.

I think it's also critical that you listen to Greg. He could have sugar-coated it a lot better, but you'll find your ideas will eventually become more consistent with his. I have had similar ideas to yours pass through my mind and can understand you're trying to make things simple, but it's fundamentally a really bad idea.

Not sure if you have read them, but be careful of confirmation bias when reading @jbogard's posts about the ES / eventual consistency 'paradoxes' / 'pragmatic' solutions.

@irium commented Aug 27, 2014

@damianh Excuse me, but you didn't read what I wrote. I don't need event sourcing, but I do need event storing. From my point of view these are different things. I looked for a general, mature and stable event store framework, and I found all of that in NEventStore; of all the libs I've seen it seems the best choice. Event sourcing is just an "addition" that comes for free with it (from my specific point of view, of course). Maybe at some point in the future I'll move on to full "right" event sourcing, and then all your words will become true.

Again: splitting a single "big" transaction into multiple small ones, one for every stream (aggregate root), implies that I need to introduce many "process managers" to verify that every step of the transaction completed successfully. Yes, that's not a big deal in itself, but the consequent need to create, implement and maintain a huge number of complex "compensating" actions seems an over-complicated price for something I don't really need.

I've read jbogard's and many others' good blogs, articles and so on. (In fact, I'm subscribed to jbogard's great blog and have been following it for a long time.) So I know what I'm talking about. One more circumstance: I'm under heavy time pressure, and I'm afraid that implementing full ES the "right way" (with all the aforementioned compensating-action logic) will take much more time than I have.

Again, reread the scenario I've described. There's no requirement to "use event sourcing". I just need a full domain event log, to be able to import it into another system (which in turn doesn't use ES at all).

you're trying to make things simple, but it's fundamentally a really bad idea.

Heh, "keeping things simple" is what software design and architecture are all about, isn't it? Maybe I'm wrong and this approach "will fail", but that will be my own fall and a good lesson :)

From all my experience (over 20 years in software development): over-complicating things is much worse than "under-complicating". Complexity can always be added later when it's really needed, but getting rid of unneeded things is much harder.

@gregoryyoung

What you think is simple is not. Determining what is 'simple' on your first system, without operational experience or seeing how such systems hold up over time, is a really bad idea. If you wanted to become a professional swimmer, would you use yourself as a coach, or might you be interested in someone who has done it before? Why not chat with Jimmy about just building an event store in SQL as a table, since it was so 'simple'.

I doubt that anything written here will change your obviously made-up mind, but as an example:

There's no requirement to "use event sourcing". I just need a full domain event log, to be able to import it into another system (which in turn doesn't use ES at all).

What happens if you have a bug and your saved aggregate state doesn't match the events you produced? I have watched huge numbers of people make this same mistake. I have heard answers like 'we will write unit tests'; tests do not demonstrate the absence of bugs. You have two sources of truth in your system and they will go out of sync.

Greg


@bartelink

@irium But critically: "Make things as simple as possible, but not simpler" (derived from "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.").

My point on @jbogard was not an appeal to authority but a "beware the confirmation bias".

@jbogard commented Aug 27, 2014

Eh? The default starting mode of event sourcing is to write to your aggregate and run projections at the same time, in the same transaction. I made the mistake early on of assuming I needed asynchronous projections using messages and queues, and I had to fake it in the UI. Just make the projections synchronous.

I'd also be a little careful with the term "eventual consistency" when applied to folks like Google, Amazon etc. Those papers talk about "eventual consistency" in CAP terms, not in terms of two entirely different datastores. EC in Riak, for example, an AP system, is between nodes, and consistency in writes can be achieved via CRDTs.

All those blog posts came from me working on rescue projects of CQRS/ES systems where they had tried to shoehorn in "async all the things", when in reality async projections were only needed for the read-heavy "front-end". Back-end admin was all sync. Easy peasy, and still CQRS.

@damianh (Contributor, Author) commented Aug 27, 2014

Hi @jbogard, welcome.

I needed asynchronous projections using messages and queues, and I had to fake it in the UI. Just make the projections synchronous

... I'm assuming you mean things like NServiceBus, RabbitMQ, MassTransit?

You can do asynchronous projections without all that stuff: it's just a separate thread in the same process reading from the eventstore. I do this all the time, and it's easy peasy too. ETags and conditional GETs make it relatively simple to get the desired UI behavior.
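
A sketch of the kind of ETag-based polling being alluded to (a hypothetical helper, not NEventStore API: the client keeps the ETag of the stale read it already rendered and re-issues a conditional GET until the projection catches up):

    open System.Net
    open System.Net.Http
    open System.Threading.Tasks

    let waitForFreshRead (http : HttpClient) (url : string) (staleEtag : string) =
        task {
            let mutable fresh = false
            while not fresh do
                use request = new HttpRequestMessage(HttpMethod.Get, url)
                request.Headers.TryAddWithoutValidation("If-None-Match", staleEtag) |> ignore
                use! response = http.SendAsync(request)
                // 304 Not Modified: the read model still serves the old version
                if response.StatusCode = HttpStatusCode.NotModified then
                    do! Task.Delay 50
                else
                    fresh <- true
        }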

@gregoryyoung

I'll play captain obvious for a moment: with everything sync, how exactly would you rebuild a projection from events or, for fun, add a new one?

A bug causes stored aggregate state not to match the event stream. How will you detect/recover from this?

Also, let's imagine this system takes, say, orders from clients. It makes total sense that a bug in a projection (or a projection being rebuilt) for a report will stop all orders from being processed in the system.

And when users want a cube for doing pivots?

OK, let's do one more for fun. What happens when you have more than one thread making noncommutative (i.e. dependent) updates via events and updating the read model? Did the read model update in the same order as the events were written (multiple streams to the same projection)? What happens when you replay?

It can be done. Not so simple.

I can keep going on... What people think of as simple misses huge numbers of edge cases. Of course you could just say 'if we have bugs and our events are wrong vs our aggregate state, or vice versa, it's a manual process; we won't replay projections; and we will run on a single thread', however a change to any of these will be not so simple.

What you are doing is taking on all the complexities of such systems and receiving none of the value. At that point I would opt for just a basic db-backed system and write/read to/from the database.

Greg

http://www.theskyisnotyourlimit.com/wp-content/uploads/2012/05/meerkat.jpg


@damianh (Contributor, Author) commented Aug 27, 2014

+1 Greg

@jbogard commented Aug 27, 2014

@gregoryyoung That last solution is exactly what I recommended. If the expectation is sync within a bounded context, then choose a solution that best fits that paradigm. Otherwise, it worked well for these clients to block on specific projections and wait for those to complete before finishing a request (that's what I meant by "make it sync by default"). That eliminated bug reports of "I approved the invoice and went back to the list of unapproved invoices and it was still there."

@damianh Those are the same solutions as in document databases with exclusively secondary indexes: just use an etag, conditional GET etc. Then you get solutions like this http://octopusdeploy.com/blog/perceptual-consistency-in-ravendb, which is not unlike an "opt-in for sync" for projections you KNOW are built for the very next screen. Also not unlike Azure DocumentDB's consistency-level choices.

@gregoryyoung

"@gregoryyoung https://github.com/gregoryyoung That last solution is
exactly what I recommended. If the expectation is sync, within a bounded
context, then choose a solution that best fits that paradigm. Otherwise, it
worked well for these clients to block on specific projections and wait
for those to complete before finishing a request (that's what I meant by
"make it sync by default"). That eliminated bug reports of "I approved the
invoice and went back to the list of unapproved invoices and it was still
there.""

Without the ability to replay a projection, or to have events as your book of record, you should just give up on events; you have lost almost all of their value. Just do it at the db level (use an ORM if you want, then use materialized views or query your db directly for reads). The problem is doing events alongside this as well (it becomes more complex, not less).


@jbogard commented Aug 27, 2014

Why do you think we didn't have the ability to replay a projection? We did this quite frequently as screens were tweaked. Events were still the system of record; we just eliminated the need for back-end admins to hit F5 (or see spinner GIFs) by dispatching events to projections immediately, initially for all projections and eventually for targeted ones.

Those projections weren't prevented from being async; we just dispatched immediately and awaited "done" for those events/projections. That was the default mode until we hit projections that weren't used in the system that originated the commands (star schema for OLAP/legacy etc.).

@gregoryyoung

Then you are discussing something very different from this thread.

"Why do you think we didn't have the ability to replay a projection?" You had said you went with the last option; I interpreted that as the option of never replaying projections, only using one thread, etc.

"Those projections weren't prevented from being async, we just dispatched immediately and awaited 'done' for those events/projections."
That is not synchronous-and-transactional either. That is asynchronous plus waiting for some reasonable period of time. If you were rebuilding a projection, my guess is you fell back to eventual consistency as opposed to bombing out the original write, no?


@jbogard commented Aug 27, 2014

Ah, no: the last option being "just use an f'ing database".

The progression for intra-app projections was:

  1. Synchronous & transactional
  2. Synchronous and not transactional
  3. Dispatched async and awaited

Number 1 covered the vast majority of cases; number 2 hit the next 15%, then 3 was the last 5%. This tracked well to an existing system having, say, 100 or so tables but only about 20 or so being the core business. That's what I typically see: ES replacing an existing "DB all the things" system, but applied across the whole system rather than selectively to the aggregates that could truly benefit.
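
As a sketch of option 3 (the types here are hypothetical, not the actual dispatcher described): events are dispatched asynchronously, but the originating request awaits "done" before returning, so the user reads their own write.

    open System.Threading.Tasks

    type IProjection =
        abstract Apply : event : obj -> Task

    let dispatchAndAwait (localProjections : IProjection list) (events : obj list) : Task =
        task {
            for event in events do
                // fan each event out to every local projection,
                // then block the request on their completion
                let work = localProjections |> List.map (fun p -> p.Apply event)
                do! Task.WhenAll(work)
        }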

@gregoryyoung

Ah, OK :)

And yes, it's quite common to see people try to replace one monolith with another.


@dennisdoomen

Just out of curiosity, what other improvements does v6 introduce that you think people currently running v5 (and relying on that transaction support for historical reasons) will need? In other words, what are they going to miss if they have to stick with v5 for the coming years just because they can't upgrade to v6?

@larsw commented Aug 28, 2014

@dennisdoomen filter the issues on the v6.0.0 tag.

@bartelink

@dennisdoomen Async is the biggie, which I assume is no dealbreaker for that kind of system (esp. given that the sync equivalents will contemporaneously bite the dust).

@jbogard Thanks for the extra detail; @irium see what I mean by watching out for confirmation bias wrt Jimmy's posts?

@jbogard Can you expand a little on the details of how exactly you ensure that the projection(s) have indeed completed their work in the "dispatched async and awaited" scheme? I.e. do you, e.g., watch for a specific event to pass through, or wait for its queue to drain, etc.? (Or have you blogged this and I've missed it?)

It's been a while since you blogged on this topic and, to be honest, I think your clarification here is blogworthy, as I think it's pretty clear that people (we have an exhibit :P) are drawing inferences, and Thought Leaders bear Attendant Responsibilities :D

@jbogard commented Aug 28, 2014

@bartelink No, I hadn't blogged about it.

In most ES systems I've encountered/built, there are two kinds of projections: local and remote. Local projections are projections run for commands originating from the local application; remote projections are used outside the originating command's application. The classic example is a back-end admin system and a front-end, read-only system: an e-commerce product catalog, for example.

Within a local application, the (rightful) expectation is immediate consistency. You make a change, and you expect to see the changes immediately, throughout the application.

Remote projections, or projections where there is no reasonable expectation of immediacy, were done in an offline manner, on a separate thread/service, with no communication/notification to the originating command's app that the event has been dispatched successfully.

Local projections were done through a special dispatcher that only knows about projections local to that app. At that point it becomes a decision point: do you make the local dispatcher transactional, sync but in a separate transaction, or async/awaited?

This sort of problem comes up all the time on the DDD/CQRS list from folks who aren't building CEP systems but are trying to build business apps using event sourcing. There's a rightful expectation of local consistency from the user. The same questions/concerns come up on the RavenDB channels (SO, Jabbr, google group).

Rather than wave the problem away without details, I think this sort of thing should be documented and easily exposed, like RavenDB does, because for business apps that use event sourcing, immediate consistency for projections in some form or fashion is almost always needed.

@gregoryyoung

Be careful on wording here:

"Rather than wave the problem away without details, I think this sort of thing should be documented and easily exposed, like RavenDB does, because for business apps that use event sourcing, immediate consistency for projections in some form or fashion is almost always needed."

My understanding is that you did not provide immediate consistency but blocked on an async operation. Can you clarify?

Greg


@jbogard commented Aug 28, 2014

For a small percentage of local projections, we blocked on an async operation. This was easier than some other solutions I've seen/used, like websockets/long-polling to refresh the page, displaying a toast notification, updating a local/client-side view model, etc.

@bartelink

@jbogard That's as good as a blog post for me (esp. with @gregoryyoung's clarification).

All you need to do now is link to here from http://lostechies.com/jimmybogard/2012/08/22/busting-some-cqrs-myths/ ; I suspect the sentence

That is, your read store can be updated when your command side succeeds (in the same transaction).

could be misleading to some, in the same way that your first post in this thread can be [mis]read as advocating that bolting it all together in one nice transaction, to "get rid of that 'eventual consistency crap'", is a cut-and-dried default practice, at least if one is seeking to confirm that one has stumbled on the optimal subset of CQRS-ES that nobody else seems to have realized for some insane reason.

While I personally was not confused by it (your post is an excellent summary of ideas in that space and helped me a lot, in conjunction with "CQRS and user experience"), I think expanding it out would be helpful.

@EsbenSkovPedersen

I understand that writing to multiple streams is bad in most circumstances, but are there not exceptions which don't involve transactions?

One example which seems useful to me is one we discussed at a DDD workshop with Vaughn Vernon. Say an aggregate A1 is creating another aggregate A2, but A1 still needs to track the newest instance of A2. Since this aggregate A2 is entirely new and its id is generated by Guid.NewGuid, there would never be a conflict. (We agree that Guids don't collide, right?)

This is a useful scenario which it would be a shame to kill in the future.

@gregoryyoung

"A2 is entirely new and the id is generated by Guid.NewGuid there would
never be a conflict(We agree that Guids don't collide, right?)"

Another server is doing the same... a2 fails.

Distributed: the server for a2 is down.

The server for a2 ran out of disk.

Quite a few things that can fail. That said you can pretty easily do such
cases without transactions (pretty basic compensation) write a1 if
it succeeds then write a2. In the tiny % that fail on a2 (none likely on
local db) just let someone know it happened.
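
A sketch of that compensation, with writeStream and notifyOperator as hypothetical stand-ins for one's own persistence and alerting code:

    // Write A1 first; if the subsequent A2 append fails, don't roll A1 back,
    // just make sure a human hears about it.
    let writeA1ThenA2 (writeStream : string -> obj -> unit) (notifyOperator : string -> unit) =
        writeStream "A1" (box "A2Created")
        try
            writeStream "A2" (box "Created")
        with ex ->
            notifyOperator (sprintf "A2 creation failed after A1 was committed: %s" ex.Message)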


@gregoryyoung

btw, DTC in this "tiny % that fail" will also just let someone know (via email/log message), e.g. if you get a network partition during the commit phase.


@thefringeninja (Contributor)

Was the DDD workshop based around event sourcing or 'regular' DDD? If you are using NEventStore, just embrace eventual consistency.

@EsbenSkovPedersen

@gregoryyoung Seems reasonable to just run it in two transactions. We have not actually needed to modify two streams yet, even after several months of development.

@EsbenSkovPedersen

@thefringeninja It was a regular DDD workshop, with event sourcing offered as an alternative persistence strategy.

@damianh (Contributor, Author) commented Sep 23, 2014

You can still update more than one stream, just not in a transaction. If you think you need to update more than one stream in a transaction, you should reconsider your model; you likely have consistency-boundary problems. Keeping this feature allows (encourages, even) the developer not to address such issues.

It's also leaky, as not all storage engines support DTC. And as Greg says, DTC works... until it doesn't, so you would need to deal with that anyway.

Supporting transactions/DTC also encourages the developer to shirk considering idempotency. Philosophically, I'm not happy that we encourage that.

@damianh damianh removed their assignment May 4, 2015
@AGiorgetti AGiorgetti removed this from the v6.0.0 milestone Jul 12, 2016
@Ryan-Palmer commented Jul 12, 2019

I guess this change didn't actually happen and probably won't, now that the project is pretty static, but as a talking point: I think I might have a valid use case for writing to more than one stream in a transaction, and would value any comments.

I am building a mobile application which will work offline and sync with an EventStore server when it reconnects. To do this I am storing all of the pending commands as well as the events, so after pulling new data I can re-run the local behaviour and resolve any conflicts.

The best way I could see to do this was to use the NES buckets to partition my store into pending commands, pending events and pulled events. This is my first go at event sourcing, so I may be off the mark somewhat.

(Edit: I have just had a bit of an epiphany and I think I can have just two partitions, downloaded events and pending commands, so I treat everything as a simulation until it is synced with the server. Trying to keep three event partitions in sync was starting to look overly complex.)

This approach means I often need to write to multiple partitions for the same aggregate.

For instance, when a batch of pending events is generated, I store them and their associated command using the same stream ID but in separate buckets, and I want this to be transactional.

I also keep an additional partition ('combined') which has all of the pulled and pending events in one place. I intend to use it for my projections, as it provides one checkpointed set of commits.

This needs to be appended to whenever a new pending event is committed, and it also has to be completely rebuilt after a merge by iterating over both the pulled and pending partitions. This rebuild should also be transactional, but it has to write to every aggregate in the combined stream.

Finally, although this affects a single stream: as part of the above merge process I need to delete / recreate entire streams in the pending or combined partitions. As (understandably) you can't remove events from an opened stream, I have to use the Advanced.DeleteStream method and then recreate the stream afterwards. Again, if the create fails I want the delete to be rolled back.

Does any/all of this seem like a viable plan? I totally get that committing to multiple aggregates in a transaction shows you probably have an opportunity to improve your model, but I am being careful not to do that other than in the rebuild case.

@AGiorgetti (Member)

Transaction support has been overhauled in the current NEventStore 6.1 and NEventStore.Persistence.Sql 7.x.

A TransactionScope with TransactionScopeOption.Suppress surrounding every operation will no longer be created by default (you need to enable the old behavior explicitly by calling a configuration function; take a look at the changelogs).

The only behavior left for Microsoft SQL Server is: if there's no active transaction (no TransactionScope already created by the user), a private SQL transaction with ReadCommitted isolation will be created for each operation.

With the new persistence provider, transaction management is totally up to the user.

Some tests were added to show supported and unsupported scenarios (take a look at the Microsoft SQL Server persistence tests).

The old behavior can be enabled during NEventStore configuration.

@Ryan-Palmer

Hello, thanks for that info.

I previously gathered from the docs that I needed to call the 'EnlistInAmbientTransaction' wireup, and then surround my NEventStore access with a using block scoped around a new Transaction. That transaction needs to be manually configured with Required scope, ReadCommitted isolation and max-timeout options.

I am using Sqlite on a mobile device.

This should be all I need to do, shouldn't it?
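
For reference, the wireup being described might look roughly like this (a sketch only; the connection-string name and the exact SQLite persistence/dialect overloads are assumptions, so check the NEventStore docs):

    open NEventStore
    open NEventStore.Persistence.Sql.SqlDialects

    let store : IStoreEvents =
        Wireup.Init()
            .UsingSqlPersistence("EventStore")   // named connection string (assumed)
            .WithDialect(new SqliteDialect())
            .EnlistInAmbientTransaction()        // opt in to the ambient TransactionScope
            .InitializeStorageEngine()
            .UsingJsonSerialization()
            .Build()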

I was also interested in comments on whether these are valid use cases for multi-stream transactions (same aggregate, many partitions etc.).

Cheers!

@AGiorgetti (Member)

Writing to multiple streams in a transaction is still a "do it at your own risk" scenario.

All you need to do with the new default is (see the sketch below):

1- open a transaction scope
2- write to as many streams as you want
3- complete the scope (or dispose it)

Be careful if you write from multiple threads or if DTC kicks in (because you are mixing writes from multiple connections); it might or might not work depending on many factors.
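
Those three steps, as a sketch (stream ids are illustrative, and the OpenStream/CommitChanges calls assume the usual IStoreEvents API):

    open System
    open System.Transactions
    open NEventStore

    let writeTwoStreams (store : IStoreEvents) =
        use scope = new TransactionScope()                    // 1- open the scope
        use streamA = store.OpenStream("stream-a")
        use streamB = store.OpenStream("stream-b")
        streamA.Add(EventMessage(Body = box "event-for-a"))   // 2- write to both streams
        streamB.Add(EventMessage(Body = box "event-for-b"))
        streamA.CommitChanges(Guid.NewGuid())
        streamB.CommitChanges(Guid.NewGuid())
        scope.Complete()                                      // 3- complete (disposing without
                                                              //    completing rolls back)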

@AGiorgetti AGiorgetti added this to the 7.0.0 milestone Jul 12, 2019
@Ryan-Palmer commented Jul 12, 2019

I have all my writes queued up on a single thread using an F# mailbox processor so they can't tread on each other's toes; that has always worked well for me.

Are you saying that you no longer need to do the ambient-transaction wireup, and don't need to set the transaction's scope to Required or its IsolationLevel to ReadCommitted? Is all of that unnecessary now? Do I literally just need to create a vanilla transaction scope?

@Ryan-Palmer commented Jul 12, 2019

Just to clarify, it is F#, but this is how I am using the connection (the command is a func that takes an event store connection and uses it for whatever access it needs):

    open System.Transactions

    let writeConnectionManager (eventStore : IStoreEvents) =
        MailboxProcessor<EventStoreCommandMessage>.Start(fun inbox ->

            // Required scope + ReadCommitted isolation + max timeout
            let getTxScope () =
                let mutable txOptions = TransactionOptions()
                txOptions.IsolationLevel <- IsolationLevel.ReadCommitted
                txOptions.Timeout <- TransactionManager.MaximumTimeout
                new TransactionScope(TransactionScopeOption.Required, txOptions)

            // single-threaded write loop: each command runs inside its own scope
            let rec innerLoop eventStore = async {
                let! (command, replyChannel) = inbox.Receive()

                use tx = getTxScope ()
                let result = eventStore |> command
                tx.Complete()

                replyChannel.Reply result

                return! innerLoop eventStore   // tail call, so the loop doesn't grow the stack
            }
            innerLoop eventStore)

I haven't actually used the .NET BCL Transaction stuff before; our apps have all traditionally used Sqlite, which just has a 'run in transaction' method, so this is all a bit new to me (in addition to learning event sourcing, the NES library and the EventStore server!).

@bartelink

If you're in F# land, you might be interested in peeking at https://github.com/jet/propulsion. In there, the general pattern I use is to read in batches and then concurrently and idempotently process the inputs. As progress is achieved, checkpoints get stored in the checkpoint store. The missing bit is that storing that state in ES isn't implemented yet.

While it's unlikely to be directly useful for you, the point for me is that tying stuff together with transactions is just a big dead end IME. Instead, operating idempotently, where any given request can be retried safely, will give better throughput, resilience and understandability without leaning on transactions. Of course this does imply more eventual consistency than one might argue/feel/hope for when reaching for transactions as your hammer.

(I don't want to hijack this thread; if you're interested in to/fro regarding the lib or the concepts, I'm on vacation but drop into the DDD-CQRS-ES slack's #equinox channel evenings. That slack can also be very useful for talking through these sorts of architectural concepts.)

A final point re the MailboxProcessor: in general, my experience is that stuffing things into an agent queue is unbounded and hard to troubleshoot. While it's all 'just actors', the approach of idempotently reacting on a more pull-oriented basis, and hence having intrinsic backpressure, pretty much rules out agents (though Propulsion can be viewed as employing a chain of cooperating agents too).

@Ryan-Palmer commented Jul 12, 2019

Hey man, thanks for that, I will check out the links :)

In my case, the reason I am trying to use transactions is to write one aggregate across multiple partitions (as detailed earlier), and also to ensure that deletes are rolled back if a subsequent create fails when moving streams between partitions or replacing them, both of which are, I guess, quite unusual.

It is because the mobile NES instance isn't my source of truth; the EventStore server is, so NES needs to be partitioned / merged and also re-written often.

I heed your warning about the mailbox processor, but I think that as the NES connection is running locally on the device, serving one user, the queue shouldn't get too long, as the writes will be very low impact. I wouldn't consider doing this on a server that is on a network etc.

Anyway, you have given me some stuff to chew on, thanks!

@AGiorgetti (Member)

@RyanBuzzInteractive the code you posted should work, as long as anything that happens inside "command" is single-threaded.
You can however write a test to confirm the scenario works for you; if it does not, file an issue with the failing test in the NEventStore.Persistence.SQL project.

@Ryan-Palmer commented Jul 12, 2019

Thanks :) I'm still not clear, though: are you saying all of that setup on the TransactionScope object is or isn't necessary? You seemed to imply it was unnecessary earlier.

@AGiorgetti (Member)

It is still necessary, and if you plan to use async/await you'll need to add TransactionScopeAsyncFlowOption.Enabled to the mix too.
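
Concretely, that means the getTxScope helper from the snippet above gains a third argument; this TransactionScope constructor overload is part of System.Transactions:

    let getAsyncTxScope () =
        let mutable txOptions = TransactionOptions()
        txOptions.IsolationLevel <- IsolationLevel.ReadCommitted
        txOptions.Timeout <- TransactionManager.MaximumTimeout
        // lets the ambient transaction flow across awaits, so async
        // continuations still see Transaction.Current
        new TransactionScope(TransactionScopeOption.Required, txOptions,
                             TransactionScopeAsyncFlowOption.Enabled)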

@Ryan-Palmer

Ok cool, thanks for the clarification :)

@AGiorgetti (Member)

This whole discussion actually relates to the transaction support in the NEventStore.Persistence.XXX implementations (specifically the SQL one).

With the new version 7.x the defaults were changed: NEventStore no longer creates or suppresses transactions; everything is left to the user.

Writing to more than one stream per transaction is still an officially unsupported scenario: do it at your own risk, because you know what you are doing!

You can revert to the old NEventStore behavior when configuring the store itself (see the changelogs and release notes).

I'm going to close this thread; feel free to reopen it if new considerations arise.

@gaevoy commented Mar 19, 2021

I experimented a bit with using NEventStore in the role of a transactional outbox, and it turned out to work well: https://gaevoy.com/2021/03/18/audit-log-via-transactional-outbox.html

Is that a good excuse for using transactions? 😃
