<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta name="robots" content="index,nofollow">
<title>ConcurrentMLImplementation - MLton Standard ML Compiler (SML Compiler)</title>
<link rel="stylesheet" type="text/css" charset="iso-8859-1" media="all" href="common.css">
<link rel="stylesheet" type="text/css" charset="iso-8859-1" media="screen" href="screen.css">
<link rel="stylesheet" type="text/css" charset="iso-8859-1" media="print" href="print.css">
<link rel="Start" href="Home">
</head>
<body lang="en" dir="ltr">
<script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-833377-1";
urchinTracker();
</script>
<div id="content" lang="en" dir="ltr">
<h1>ConcurrentMLImplementation</h1>
Here are some notes on MLton's implementation of <a href="ConcurrentML">ConcurrentML</a>. <p>
Concurrent ML was originally implemented for SML/NJ. It was ported to MLton in the summer of 2004. The main difference between the implementations is that SML/NJ uses continuations to implement CML threads, while MLton uses its underlying <a href="MLtonThread">thread</a> package. Presently, MLton's threads are a little more heavyweight than SML/NJ's continuations, but it's pretty clear that there is some fat there that could be trimmed.
</p>
<p>
The implementation of CML in SML/NJ is built upon the first-class continuations of the <tt>SMLofNJ.Cont</tt> module.
<pre class=code>
<B><FONT COLOR="#A020F0">type</FONT></B><B><FONT COLOR="#228B22"> 'a cont
</FONT></B><B><FONT COLOR="#A020F0">val</FONT></B> callcc: ('a cont -> 'a) -> 'a
<B><FONT COLOR="#A020F0">val</FONT></B> isolate: ('a -> unit) -> 'a cont
<B><FONT COLOR="#A020F0">val</FONT></B> throw: 'a cont -> 'a -> 'b
</PRE>
</p>
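<p>
As a brief usage sketch (not taken from the CML sources): <tt>callcc</tt> captures the rest of the computation as a continuation, and <tt>throw</tt> abandons the current computation in its favor.
</p>
<pre class=code>
structure C = SMLofNJ.Cont

(* throwing 3 to k abandons the pending addition, so the whole
 * callcc expression evaluates to 3 *)
val three: int = C.callcc (fn k => 1 + C.throw k 3)
</pre>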
<p>
The implementation of CML in MLton is built upon the first-class threads of the <a href="MLtonThread">MLtonThread</a> module.
<pre class=code>
<B><FONT COLOR="#A020F0">type</FONT></B><B><FONT COLOR="#228B22"> 'a t
</FONT></B><B><FONT COLOR="#A020F0">val</FONT></B> new: ('a -> unit) -> 'a t
<B><FONT COLOR="#A020F0">val</FONT></B> prepare: 'a t * 'a -> Runnable.t
<B><FONT COLOR="#A020F0">val</FONT></B> switch: ('a t -> Runnable.t) -> 'a
</PRE>
</p>
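<p>
As a brief usage sketch (again, not taken from the CML sources), the following hands a value to a freshly created thread, which increments it and switches back:
</p>
<pre class=code>
structure T = MLton.Thread

val result: int =
   T.switch
      (fn (main: int T.t) =>
          T.prepare
             (T.new (fn (n: int) =>
                        (* running in the new thread: hand n + 1 back
                         * to main and abandon this thread *)
                        T.switch (fn _ => T.prepare (main, n + 1))),
              41))
(* result is 42 *)
</pre>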
<p>
The port is relatively straightforward, because CML always throws to a continuation at most once. Hence, an "abstract" implementation of CML could be built upon first-class one-shot continuations, which map equally well to SML/NJ's continuations and MLton's threads.
</p>
<p>
The "essence" of the port is to transform:
<pre>callcc (fn k => ... throw k' v')
</pre>to
<pre>switch (fn t => ... prepare (t', v'))
</pre>which suffices for the vast majority of the CML implementation.
</p>
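<p>
To make the correspondence concrete, here is a hedged sketch (not part of the actual port) of one-shot <tt>callcc</tt>/<tt>throw</tt> expressed with <tt>switch</tt>/<tt>prepare</tt>; the <tt>OneShot</tt> structure is illustrative only and ignores the critical-section issues discussed later.
</p>
<pre class=code>
structure OneShot =
   struct
      structure T = MLton.Thread

      type 'a cont = 'a T.t

      (* may be thrown to at most once, which is all CML requires *)
      fun throw (k: 'a cont) (v: 'a): 'b =
         T.switch (fn _ => T.prepare (k, v))

      (* capture the current thread as a one-shot continuation and run
       * f in a helper thread; if f returns normally, resume the
       * captured thread with f's result *)
      fun callcc (f: 'a cont -> 'a): 'a =
         T.switch
            (fn (k: 'a T.t) =>
                T.prepare (T.new (fn () => throw k (f k)), ()))
   end
</pre>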
<p>
There was only one complicated transformation: blocking multiple base events. In SML/NJ CML, the representation of base events is given by:
<pre class=code>
<B><FONT COLOR="#A020F0">datatype</FONT></B><B><FONT COLOR="#228B22"> 'a event_status
</FONT></B>=<B><FONT COLOR="#228B22"> <FONT COLOR="#B8860B">ENABLED</FONT> <B><FONT COLOR="#A020F0">of</FONT></B> {prio: int, doFn: unit -> 'a}
</FONT></B>|<B><FONT COLOR="#228B22"> <FONT COLOR="#B8860B">BLOCKED</FONT> <B><FONT COLOR="#A020F0">of</FONT></B> {
transId: trans_id ref,
cleanUp: unit -> unit,
next: unit -> unit
} -> 'a
</FONT></B><B><FONT COLOR="#A020F0">type</FONT></B><B><FONT COLOR="#228B22"> 'a base_evt </FONT></B>=<B><FONT COLOR="#228B22"> unit -> 'a event_status
</FONT></B></PRE>
</p>
<p>
When synchronizing on a set of base events that are all blocked, we must invoke each <tt>BLOCKED</tt> function with the same <tt>transId</tt> and <tt>cleanUp</tt>. (The <tt>transId</tt> is (checked and) set to <tt>CANCEL</tt> by the <tt>cleanUp</tt> function, which is invoked by the first event to become enabled; this "fizzles" every other event in the synchronization group that later becomes enabled.) However, each <tt>BLOCKED</tt> function is implemented with a <tt>callcc</tt>, so that when the event is enabled, it throws back to the point of synchronization. Hence, the <tt>next</tt> function (which doesn't return) is invoked by the <tt>BLOCKED</tt> function to escape the <tt>callcc</tt> and continue in the thread performing the synchronization. In SML/NJ this is implemented as follows:
<pre class=code>
<B><FONT COLOR="#A020F0">fun</FONT></B> ext ([], blockFns) = callcc (<B><FONT COLOR="#A020F0">fn</FONT></B> k => <B><FONT COLOR="#0000FF">let</FONT></B>
<B><FONT COLOR="#A020F0">val</FONT></B> throw = throw k
<B><FONT COLOR="#A020F0">val</FONT></B> (transId, setFlg) = mkFlg()
<B><FONT COLOR="#A020F0">fun</FONT></B> log [] = S.atomicDispatch ()
| log (blockFn :: r) =
throw (blockFn {
transId = transId,
cleanUp = setFlg,
next = <B><FONT COLOR="#A020F0">fn</FONT></B> () => log r
})
<B><FONT COLOR="#0000FF">in</FONT></B>
log blockFns; error <B><FONT COLOR="#BC8F8F">"[log]"</FONT></B>
<B><FONT COLOR="#0000FF">end</FONT></B>)
</PRE>
(Note that <tt>S.atomicDispatch</tt> dequeues the next continuation from the ready queue and throws to it.) This doesn't map well to the MLton thread model. Although it follows the
<pre>callcc (fn k => ... throw k v)
</pre>model, the fact that <tt>blockFn</tt> will also attempt to do
<pre>callcc (fn k' => ... next ())
</pre>means that the naive transformation will result in nested <tt>switch</tt>es.
</p>
<p>
We need to think a little more about what this code is trying to do. Essentially, each <tt>blockFn</tt> wants to capture this continuation, hold on to it until the event is enabled, and continue with <tt>next</tt>; when the event is enabled, before invoking the continuation and returning to the synchronization point, the <tt>cleanUp</tt> and other event-specific operations are performed.
</p>
<p>
To accomplish the same effect in the MLton thread implementation, we have the following:
<pre class=code>
<B><FONT COLOR="#A020F0">datatype</FONT></B><B><FONT COLOR="#228B22"> 'a status </FONT></B>=<B><FONT COLOR="#228B22">
<FONT COLOR="#B8860B">ENABLED</FONT> <B><FONT COLOR="#A020F0">of</FONT></B> {prio: int, doitFn: unit -> 'a}
</FONT></B>|<B><FONT COLOR="#228B22"> <FONT COLOR="#B8860B">BLOCKED</FONT> <B><FONT COLOR="#A020F0">of</FONT></B> {transId: trans_id,
cleanUp: unit -> unit,
next: unit -> rdy_thread} -> 'a
</FONT></B><B><FONT COLOR="#A020F0">type</FONT></B><B><FONT COLOR="#228B22"> 'a base </FONT></B>=<B><FONT COLOR="#228B22"> unit -> 'a status
</FONT></B><B><FONT COLOR="#A020F0">fun</FONT></B> ext ([], blockFns): 'a =
S.atomicSwitch
(<B><FONT COLOR="#A020F0">fn</FONT></B> (t: 'a S.thread) =>
<B><FONT COLOR="#0000FF">let</FONT></B>
<B><FONT COLOR="#A020F0">val</FONT></B> (transId, cleanUp) = TransID.mkFlg ()
<B><FONT COLOR="#A020F0">fun</FONT></B> log blockFns: S.rdy_thread =
<B><FONT COLOR="#A020F0">case</FONT></B> blockFns <B><FONT COLOR="#A020F0">of</FONT></B>
[] => S.next ()
| blockFn::blockFns =>
(S.prep o S.new)
(<B><FONT COLOR="#A020F0">fn</FONT></B> _ => <B><FONT COLOR="#A020F0">fn</FONT></B> () =>
<B><FONT COLOR="#0000FF">let</FONT></B>
<B><FONT COLOR="#A020F0">val</FONT></B> () = S.atomicBegin ()
<B><FONT COLOR="#A020F0">val</FONT></B> x = blockFn {transId = transId,
cleanUp = cleanUp,
next = <B><FONT COLOR="#A020F0">fn</FONT></B> () => log blockFns}
<B><FONT COLOR="#0000FF">in</FONT></B> S.switch(<B><FONT COLOR="#A020F0">fn</FONT></B> _ => S.prepVal (t, x))
<B><FONT COLOR="#0000FF">end</FONT></B>)
<B><FONT COLOR="#0000FF">in</FONT></B>
log blockFns
<B><FONT COLOR="#0000FF">end</FONT></B>)
</PRE>
</p>
<p>
To avoid the nested <tt>switch</tt>es, I run the <tt>blockFn</tt> in its own thread, whose only purpose is to return to the synchronization point. This corresponds to the <tt>throw (blockFn {...})</tt> in the SML/NJ implementation. I'm worried that this implementation might be a little expensive, since it starts a new thread for each blocked event (though this only happens when there are multiple blocked events in a synchronization group). But I don't see another way of implementing this behavior in the MLton thread model.
</p>
<p>
Note that another way of thinking about what is going on is to consider each <tt>blockFn</tt> as prepending a different set of actions to the thread <tt>t</tt>. It might be possible to give a <tt>MLton.Thread.unsafePrepend</tt>.
</p>
<pre class=code>
<B><FONT COLOR="#A020F0">fun</FONT></B> unsafePrepend (T r: 'a t, f: 'b -> 'a): 'b t =
<B><FONT COLOR="#A020F0">let</FONT></B>
<B><FONT COLOR="#A020F0">val</FONT></B> t =
<B><FONT COLOR="#A020F0">case</FONT></B> !r <B><FONT COLOR="#A020F0">of</FONT></B>
Dead => <B><FONT COLOR="#A020F0">raise</FONT></B> Fail <B><FONT COLOR="#BC8F8F">"prepend to a Dead thread"</FONT></B>
| New g => New (g o f)
| Paused (g, t) => Paused (<B><FONT COLOR="#A020F0">fn</FONT></B> h => g (f o h), t)
<B><FONT COLOR="#A020F0">in</FONT></B> <I><FONT COLOR="#B22222">(* r := Dead; *)</FONT></I>
T (ref t)
<B><FONT COLOR="#A020F0">end</FONT></B>
</PRE>
<p>
I have commented out the <tt>r := Dead</tt>, which would allow multiple prepends to the same thread (i.e., not destroying the original thread in the process). Of course, only one of the threads could be run: if the original thread were in the <tt>Paused</tt> state, then multiple threads would share the underlying runtime/primitive thread. Now, this matches the "one-shot" nature of CML continuations/threads, but I'm not comfortable with extending <tt>MLton.Thread</tt> with such an unsafe operation.
</p>
<p>
Other than this complication with blocking multiple base events, the port was quite routine. (As a very pleasant surprise, the CML implementation in SML/NJ doesn't use any SML/NJ-isms.) There is a slight difference in the way in which critical sections are handled in SML/NJ and MLton; since <tt>MLton.Thread.switch</tt> <em>always</em> leaves a critical section, it is sometimes necessary to add additional <tt>atomicBegin</tt>/<tt>atomicEnd</tt> calls to ensure that we remain in a critical section after a thread switch.
</p>
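<p>
Here is a hedged sketch of the pattern, mirroring the <tt>ext</tt> code above: a helper thread that must run <tt>blockFn</tt> inside a critical section re-enters it with <tt>atomicBegin</tt>, because the <tt>switch</tt> that started the helper has already left one. <tt>blockFn</tt> and <tt>resumePoint</tt> are stand-ins, not the actual scheduler values.
</p>
<pre class=code>
structure T = MLton.Thread

fun blockedHelper (blockFn: unit -> 'a, resumePoint: 'a T.t): T.Runnable.t =
   T.prepare
      (T.new (fn () =>
                 let
                    (* the switch that ran this thread left the critical
                     * section, so re-enter it before calling blockFn *)
                    val () = T.atomicBegin ()
                    val x = blockFn ()
                 in
                    (* this switch leaves the critical section again *)
                    T.switch (fn _ => T.prepare (resumePoint, x))
                 end),
       ())
</pre>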
<p>
While looking at virtually every file in the core CML implementation, I took the liberty of simplifying things where it seemed possible; in terms of style, the implementation is about half-way between Reppy's original and MLton's.
</p>
<p>
Some changes of note:
</p>
<ul>
<li>
<p>
<tt>util/</tt> contains all pertinent data-structures: (functional and imperative) queues, (functional) priority queues. Hence, it should be easier to switch in more efficient or real-time implementations; a sketch of the kind of interface involved appears after this list.
</p>
</li>
<li class="gap">
<p>
<tt>core-cml/scheduler.sml</tt>: in both implementations, this is where most of the interesting action takes place. I've made the connection between <tt>MLton.Thread.t</tt>s and <tt>ThreadId.thread_id</tt>s more abstract than it is in the SML/NJ implementation, and encapsulated all of the <tt>MLton.Thread</tt> operations in this module.
</p>
</li>
<li class="gap">
<p>
eliminated all of the "by hand" inlining
</p>
</li>
</ul>
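<p>
As a sketch of the kind of interface involved (illustrative only, not the actual <tt>util/</tt> code), a functional queue might look like:
</p>
<pre class=code>
signature FUNCTIONAL_QUEUE =
   sig
      type 'a t
      val empty: 'a t
      val enque: 'a t * 'a -> 'a t
      val deque: 'a t -> ('a * 'a t) option
   end

structure TwoListQueue :> FUNCTIONAL_QUEUE =
   struct
      (* front is in order, back is reversed; amortized O(1) operations *)
      type 'a t = 'a list * 'a list
      val empty = ([], [])
      fun enque ((front, back), x) = (front, x :: back)
      fun deque (front, back) =
         case front of
            x :: front' => SOME (x, (front', back))
          | [] => (case rev back of
                      [] => NONE
                    | x :: front' => SOME (x, (front', [])))
   end
</pre>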
<h2 id="head-531c9b98f5b5319fbd454f61787c5cafc2812a55">Future Extensions</h2>
<p>
The CML documentation says the following:
<pre>CML.joinEvt: thread_id -> unit event
</pre>
</p>
<ul>
<li>
<tt>joinEvt tid</tt>
<p>
creates an event value for synchronizing on the termination of the thread with the ID <tt>tid</tt>. There are three ways that a thread may terminate: the function that was passed to <tt>spawn</tt> (or <tt>spawnc</tt>) may return; it may call the <tt>exit</tt> function; or it may have an uncaught exception. Note that <tt>joinEvt</tt> does not distinguish between these cases; it also does not become enabled if the named thread deadlocks (even if it is garbage collected).
</p>
</li>
</ul>
<p>
I believe that <tt>MLton.Finalizable</tt> might be able to relax that last restriction. Upon the creation of a <tt>'a Scheduler.thread</tt>, we could attach a finalizer to the underlying <tt>'a MLton.Thread.t</tt> that enables the <tt>joinEvt</tt> (in the associated <tt>ThreadID.thread_id</tt>) when the <tt>'a MLton.Thread.t</tt> becomes unreachable.
</p>
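<p>
A hedged sketch of the idea, under the assumption that the scheduler supplies an action that enables the <tt>joinEvt</tt> (passed in here as <tt>enableJoin</tt>, a hypothetical stand-in); only <tt>MLton.Finalizable</tt>'s <tt>new</tt>, <tt>addFinalizer</tt>, and <tt>withValue</tt> are the real MLton API:
</p>
<pre class=code>
structure F = MLton.Finalizable

(* wrap a freshly created thread so that enableJoin runs once the
 * underlying MLton.Thread.t becomes unreachable *)
fun finalizableThread (t: 'a MLton.Thread.t, enableJoin: unit -> unit)
    : 'a MLton.Thread.t F.t =
   let
      val ft = F.new t
      val () = F.addFinalizer (ft, fn _ => enableJoin ())
   in
      ft
   end
</pre>
<p>
The scheduler would then access the underlying thread via <tt>F.withValue</tt>, which also keeps the finalizer from firing while the thread is still in use.
</p>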
<p>
I don't know why CML doesn't have
</p>
<pre>CML.kill: thread_id -> unit
</pre><p>
which has a fairly simple implementation -- setting a kill flag in the <tt>thread_id</tt> and adjusting the scheduler to discard any killed threads that it takes off the ready queue. The fairness of the scheduler ensures that a killed thread will eventually be discarded. The semantics are a little murky for blocked threads that are killed, though. For example, consider a thread blocked on <tt>SyncVar.mTake mv</tt> and a thread blocked on <tt>SyncVar.mGet mv</tt>. If the first thread is killed while blocked, and a third thread does <tt>SyncVar.mPut (mv, x)</tt>, then we might expect that we'll enable the second thread, and never the first. But when only the ready queue is able to discard killed threads, the <tt>SyncVar.mPut</tt> could enable the first thread (putting it on the ready queue, from which it will be discarded) and leave the second thread blocked. We could solve this by adjusting the <tt>TransID.trans_id</tt> type and the "cleaner" functions to look for both canceled transactions and transactions on killed threads.
</p>
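<p>
A hedged sketch of that implementation strategy (the <tt>thread_id</tt> representation and the list-based ready queue are illustrative only):
</p>
<pre class=code>
datatype thread_id = TID of {id: int, killed: bool ref}

fun kill (TID {killed, ...}): unit = killed := true

(* ready queue, here just a list ref for illustration *)
val readyQueue: (thread_id * MLton.Thread.Runnable.t) list ref = ref []

(* dequeue the next runnable thread, discarding killed ones; scheduler
 * fairness guarantees every killed thread is eventually dropped *)
fun nextRunnable (): MLton.Thread.Runnable.t option =
   case !readyQueue of
      [] => NONE
    | (TID {killed, ...}, rt) :: rest =>
         (readyQueue := rest
          ; if !killed then nextRunnable () else SOME rt)
</pre>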
<p>
John Reppy says that <a href = "References#MarlowEtAl01">MarlowEtAl01</a> and <a href = "References#FlattFindler04">FlattFindler04</a> explain why <tt>CML.kill</tt> would be a bad idea.
</p>
<p>
Between <tt>CML.timeOutEvt</tt> and <tt>CML.kill</tt>, one could give an efficient solution to the recent comp.lang.ml post about terminating a function that doesn't complete in a given time.
<pre class=code>
<B><FONT COLOR="#A020F0">fun</FONT></B> timeOut (f: unit -> 'a, t: Time.time): 'a option =
<B><FONT COLOR="#A020F0">let</FONT></B>
<B><FONT COLOR="#A020F0">val</FONT></B> iv = SyncVar.iVar ()
<B><FONT COLOR="#A020F0">val</FONT></B> tid = CML.spawn (<B><FONT COLOR="#A020F0">fn</FONT></B> () => SyncVar.iPut (iv, f ()))
<B><FONT COLOR="#A020F0">in</FONT></B>
CML.select
[CML.wrap (CML.timeOutEvt t, <B><FONT COLOR="#A020F0">fn</FONT></B> () => (CML.kill tid; NONE)),
CML.wrap (SyncVar.iGetEvt iv, <B><FONT COLOR="#A020F0">fn</FONT></B> x => SOME x)]
<B><FONT COLOR="#A020F0">end</FONT></B>
</PRE>
</p>
<h2 id="head-77083a6d602cd5c98f0ccc3e2f7584e93ad1f737">Space Safety</h2>
<p>
There are some CML-related posts on the MLton mailing list
</p>
<ul>
<li><a class="external" href="http://mlton.org/pipermail/mlton/2004-May/">http://mlton.org/pipermail/mlton/2004-May/</a></li>
</ul>
<p>
that discuss concerns that SML/NJ's implementation is not space efficient, because multi-shot continuations can be held indefinitely on event queues. MLton is better off because of the one-shot nature of its threads -- when an event enables a thread, all other copies of the thread waiting in other event queues get turned into dead threads (of zero size).
</p>
</div>
<hr>
<p>
Last edited on 2007-08-15 22:05:31 by <a href="MatthewFluet">MatthewFluet</a>.
</p>
</body></html>