TaskCompletionSource Non-Blocking Assignment

As your app is a console app, it runs on the default synchronization context, where the await continuation callback is invoked on the same thread the awaited task was completed on. If you want to switch threads after the await, you can do so explicitly, e.g. with await Task.Yield().
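For example (a minimal sketch for illustration; ConsumeAsync and messageTask are made-up names, with messageTask being the task produced by your TaskCompletionSource):

static async Task ConsumeAsync(Task<string> messageTask)
{
    var message = await messageTask;   // may resume inline on the thread that called SetResult
    await Task.Yield();                // reschedule the rest of this method onto a ThreadPool thread
    Console.WriteLine("processing '{0}' on thread {1}", message, Thread.CurrentThread.ManagedThreadId);
}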

You could further improve this by storing the producing thread's id right before the task gets completed and comparing it to the current thread's id after the await. If you're still on the same thread, do the switch.
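A sketch of that check (assuming the producer captures Thread.CurrentThread.ManagedThreadId just before calling SetResult and passes it along as producerThreadId):

static async Task ConsumeAsync(Task<string> messageTask, int producerThreadId)
{
    var message = await messageTask;
    if (Thread.CurrentThread.ManagedThreadId == producerThreadId)
        await Task.Yield();   // we resumed inline on the producer's thread, so switch now
    // ... process the message, now on a ThreadPool thread
}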

While I understand that what you posted is a simplified version of your actual code, it's still completely synchronous inside (the way you showed it in your question). Why would you expect any thread switch in there?

Anyway, you probably should redesign your logic so that it doesn't make assumptions about what thread it is currently running on. Avoid mixing blocking calls (like Wait or Result) with await, and make all of your code asynchronous. Usually, it's possible to stick with just one blocking call somewhere on the top level (e.g. inside Main).

[EDITED] Calling SetResult on the TaskCompletionSource actually transfers the control flow, synchronously, to the point where you await the corresponding task - without a thread switch, because of the default synchronization context's behavior. So, your code which does the actual message processing takes over the thread that called SetResult. Eventually, a blocking call is made on that same thread, causing the deadlock.
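Here's a hypothetical minimal repro of that situation (not your code; just an illustration of the inline continuation and the resulting deadlock):

using System;
using System.Threading;
using System.Threading.Tasks;

class DeadlockDemo
{
    static void Main()
    {
        var tcs = new TaskCompletionSource<string>();
        var producerDone = new ManualResetEventSlim();
        var consumer = ConsumeAsync(tcs.Task, producerDone);
        Thread.Sleep(500);        // pretend a message has arrived
        tcs.SetResult("hello");   // the await continuation runs right here, on this thread
        producerDone.Set();       // never reached: the continuation is blocking this thread
        consumer.Wait();
    }

    static async Task ConsumeAsync(Task<string> messageTask, ManualResetEventSlim producerDone)
    {
        var message = await messageTask;   // resumes inline on the thread that called SetResult
        producerDone.Wait();               // blocks that same thread -> deadlock
        Console.WriteLine(message);
    }
}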

Below is a sketch of console app code, modeled after your sample. It uses Task.Run around the SetResult call to schedule the continuation on a separate thread, so the control flow returns to the producer right after SetResult and there's no deadlock.
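(The names and the single-message flow below are made up for illustration; your real message loop would go where the Sleep and the SetResult are.)

using System;
using System.Threading;
using System.Threading.Tasks;

class NoDeadlockDemo
{
    static void Main()
    {
        var tcs = new TaskCompletionSource<string>();
        var producerDone = new ManualResetEventSlim();
        var consumer = ConsumeAsync(tcs.Task, producerDone);
        Thread.Sleep(500);                        // pretend a message has arrived
        Task.Run(() => tcs.SetResult("hello"));   // complete the task on a pool thread
        producerDone.Set();                       // the producer keeps going
        consumer.Wait();                          // the single blocking call, at the very top
    }

    static async Task ConsumeAsync(Task<string> messageTask, ManualResetEventSlim producerDone)
    {
        var message = await messageTask;   // resumes on the pool thread that ran SetResult
        producerDone.Wait();               // no longer blocks the producer's thread
        Console.WriteLine("processed '{0}' on thread {1}", message, Thread.CurrentThread.ManagedThreadId);
    }
}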

This is not much different from doing the await Task.Yield() trick inside the awaiting code. The only advantage I can think of is that you have explicit control over when to switch threads. This way, you can stay on the same thread for as long as possible (e.g., while handling several consecutive messages), but you still need another thread switch at some point to avoid a deadlock on the final blocking wait.

Both solutions would eventually make the thread pool grow, which is bad in terms of performance and scalability.

Now, if we replace the blocking waits with await everywhere in the above code, we will not have to use Task.Run and there still will be no deadlocks. However, the whole chain of calls after the 1st await will actually be executed on the thread that completes the task. As long as we don't block this thread with other Wait-style calls and don't do a lot of CPU-bound work as we're processing messages, this approach might work OK (asynchronous, IO-bound await-style calls should still be OK, and they may actually trigger an implicit thread switch).

That said, I think you'd need a separate thread with a serializing synchronization context installed on it for processing messages (similar to how a UI thread works). That's where your asynchronous message-processing code should run. You'd still need to avoid blocking calls on that thread. And if processing an individual message takes a lot of CPU-bound work, you should use Task.Run for such work. For async IO-bound calls, you could stay on the same thread.

You may want to look at AsyncContext/AsyncContextThread from @StephenCleary's Nito Asynchronous Library for your asynchronous message processing logic. Hopefully, Stephen jumps in and provides a better answer.
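For illustration, a minimal sketch of running the processing loop under such a serializing context with that library's AsyncContext class (ProcessMessagesAsync is a made-up placeholder for your loop):

using System.Threading.Tasks;
using Nito.AsyncEx;

class Program
{
    static void Main()
    {
        // Installs a single-threaded synchronization context on this thread and
        // pumps all await continuations on it, one at a time, until the task completes.
        AsyncContext.Run(() => ProcessMessagesAsync());
    }

    static async Task ProcessMessagesAsync()
    {
        // awaits here resume on the AsyncContext thread; offload CPU-bound work
        // with Task.Run and avoid blocking calls on this thread
        await Task.Delay(100);
    }
}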

Blocking vs. Nonblocking in Verilog

The concept of Blocking vs. Nonblocking signal assignments is unique to hardware description languages. The main reason to use either Blocking or Nonblocking assignments is to generate either combinational or sequential logic. In software, assignments execute one at a time, in order. So for example in the C code below:

LED_on = 0;
count = count + 1;
LED_on = 1;

The second line is only allowed to execute once the first line is complete. Although you probably didn't know it, this is an example of a blocking assignment: one assignment blocks the next from executing until it is done. In a hardware description language such as Verilog, there is logic that can execute concurrently, at the same time, as opposed to one line at a time, so there needs to be a way to tell which logic is which.

<=     Nonblocking Assignment

=      Blocking Assignment   


always @(posedge i_clock)
begin
  r_Test_1 <= 1'b1;
  r_Test_2 <= r_Test_1;
  r_Test_3 <= r_Test_2;
end

The always block in the Verilog code above uses the Nonblocking Assignment, which means that it will take 3 clock cycles for the value 1 to propagate from r_Test_1 to r_Test_3. Now consider this code:

always @(posedge i_clock)
begin
  r_Test_1 = 1'b1;
  r_Test_2 = r_Test_1;
  r_Test_3 = r_Test_2;
end

See the difference? In the always block above, the Blocking Assignment is used. In this example, the value 1 will immediately propagate to r_Test_3. The Blocking Assignment immediately takes the value on the right-hand side and assigns it to the left-hand side. Here's a good rule of thumb for Verilog:

In Verilog, if you want to create sequential logic use a clocked always block with Nonblocking assignments. If you want to create combinational logic use an always block with Blocking assignments. Try not to mix the two in the same always block.
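For example, a purely combinational block (a small sketch; the i_* and r_* signal names are made up) uses Blocking assignments and a sensitivity list of all of its inputs:

always @(*)
begin
  r_Sum   = i_A ^ i_B;   // blocking: evaluated in order, like software assignments
  r_Carry = i_A & i_B;   // no clock edge anywhere, so this describes combinational logic
end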

Nonblocking and Blocking Assignments can be mixed in the same always block. However, you must be careful when doing this! It's actually up to the synthesis tools to determine whether a blocking assignment within a clocked always block will infer a Flip-Flop or not. If it is possible that the signal will be read before being assigned, the tools will infer sequential logic. If not, then the tools will generate combinational logic. For this reason it's best just to separate your combinational and sequential code as much as possible.
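A sketch of the common safe way to mix them (signal names made up): a Blocking assignment used as a local intermediate inside a clocked always block, feeding a Nonblocking assignment that becomes the flip-flop:

always @(posedge i_clock)
begin
  r_Temp = i_A & i_B;      // blocking: assigned before it is read, so no flip-flop is inferred
  r_Out <= r_Temp | i_C;   // nonblocking: r_Out is registered on the clock edge
end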

One last point: you should also understand the semantics of Verilog. When talking about Blocking and Nonblocking Assignments, we are referring to assignments that are used exclusively in Procedures (always, initial, task, function). You are only allowed to assign the reg data type in procedures. This is different from a Continuous Assignment. Continuous Assignments live outside of Procedures and only allow for updating the wire data type.
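For instance, a Continuous Assignment uses the assign keyword outside of any procedure and can only drive a wire (names made up):

wire w_Data_And;
assign w_Data_And = i_A & i_B;   // continuous assignment: always active, updates the wire whenever an input changes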


