=encoding utf-8


=head1 Name

ngx.semaphore - light thread semaphore for OpenResty/ngx_lua.

This Lua module is currently considered experimental.


=head1 Synopsis


=head2 Synchronizing threads in the same context

    location = /t {
        content_by_lua_block {
            local semaphore = require "ngx.semaphore"
            local sema = semaphore.new()

            local function handler()
                ngx.say("sub thread: waiting on sema...")

                local ok, err = sema:wait(1)  -- wait for a second at most
                if not ok then
                    ngx.say("sub thread: failed to wait on sema: ", err)
                else
                    ngx.say("sub thread: waited successfully.")
                end
            end

            local co = ngx.thread.spawn(handler)

            ngx.say("main thread: sleeping for a little while...")
            ngx.sleep(0.1)  -- wait a bit

            ngx.say("main thread: posting to sema...")
            sema:post(1)

            ngx.say("main thread: end.")
        }
    }

The example location above produces a response output like this:

    sub thread: waiting on sema...
    main thread: sleeping for a little while...
    main thread: posting to sema...
    main thread: end.
    sub thread: waited successfully.


=head2 Synchronizing threads in different contexts

    location = /t {
        content_by_lua_block {
            local semaphore = require "ngx.semaphore"
            local sema = semaphore.new()

            local outputs = {}
            local i = 1

            local function out(s)
                outputs[i] = s
                i = i + 1
            end

            local function handler()
                out("timer thread: sleeping for a little while...")
                ngx.sleep(0.1)  -- wait a bit
                out("timer thread: posting on sema...")
                sema:post(1)
            end

            assert(ngx.timer.at(0, handler))

            out("main thread: waiting on sema...")

            local ok, err = sema:wait(1)  -- wait for a second at most
            if not ok then
                out("main thread: failed to wait on sema: " .. err)
            else
                out("main thread: waited successfully.")
            end

            out("main thread: end.")

            ngx.say(table.concat(outputs, "\n"))
        }
    }

The example location above produces a response body like this:

    main thread: waiting on sema...
    timer thread: sleeping for a little while...
    timer thread: posting on sema...
    main thread: waited successfully.
    main thread: end.

The same applies to different request contexts as long as these requests are
served by the same NGINX worker process.


=head1 Description

This module provides an efficient semaphore API for the OpenResty/ngx_lua
module. With semaphores, "light threads" (created by
L<ngx.thread.spawn|https://github.com/openresty/lua-nginx-module#ngxthreadspawn>,
L<ngx.timer.at|https://github.com/openresty/lua-nginx-module#ngxtimerat>, and
so on) can synchronize among each other very efficiently without constant
polling and sleeping.

"Light threads" in different contexts (like in different requests) can share
the same semaphore instance as long as these "light threads" reside in the
same NGINX worker process and the
L<lua_code_cache|https://github.com/openresty/lua-nginx-module#lua_code_cache>
directive is turned on (which is the default).

For inter-process "light thread" synchronization, it is recommended to use the
L<lua-resty-lock|https://github.com/openresty/lua-resty-lock> library instead
(which is a bit less efficient than this semaphore API though).

This semaphore API has a pure userland implementation which involves neither
system calls nor the blocking of any operating system threads. It works
closely with the event model of NGINX without introducing any extra delay.

Like the other APIs provided by this C<lua-resty-core> library, this semaphore
API requires the LuaJIT FFI feature.


=head1 Methods


=head2 new

B<syntax:> I<sema, err = semaphore.new(n?)>

B<context:> I<init_by_luaE<42>, init_worker_by_luaE<42>, set_by_luaE<42>, rewrite_by_luaE<42>, access_by_luaE<42>, content_by_luaE<42>, header_filter_by_luaE<42>, body_filter_by_luaE<42>, log_by_luaE<42>, ngx.timer.E<42>>

Creates and returns a new semaphore instance that has C<n> (default to C<0>)
resources.

For example,

    local semaphore = require "ngx.semaphore"
    local sema, err = semaphore.new()
    if not sema then
        ngx.say("create semaphore failed: ", err)
    end

Often, the semaphore object created is shared across all the requests served
by the same NGINX worker process by mounting it in a custom Lua module, as
documented below:

https://github.com/openresty/lua-nginx-module#data-sharing-within-an-nginx-worker
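Below is a minimal sketch of that data-sharing pattern. The module name
C<my_semaphores> and the field name C<updated> are only illustrative and are
not part of this library:

    -- my_semaphores.lua (hypothetical module reachable via lua_package_path)
    local semaphore = require "ngx.semaphore"

    local _M = {}

    -- created once per NGINX worker when this module is first require'd,
    -- then reused by every "light thread" running in that worker
    _M.updated = semaphore.new()

    return _M

Any "light thread" running in the same worker can then C<require> the module
and operate on the shared instance:

    local sems = require "my_semaphores"

    -- in one "light thread" (possibly in one request):
    sems.updated:post(1)

    -- in another "light thread" (possibly in another request):
    local ok, err = sems.updated:wait(5)  -- wait for 5 seconds at most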
=head2 post

B<syntax:> I<sema:post(n?)>

B<context:> I<init_by_luaE<42>, init_worker_by_luaE<42>, set_by_luaE<42>, rewrite_by_luaE<42>, access_by_luaE<42>, content_by_luaE<42>, header_filter_by_luaE<42>, body_filter_by_luaE<42>, log_by_luaE<42>, ngx.timer.E<42>>

Releases C<n> (default to C<1>) "resources" to the semaphore instance.

This will not yield the current running "light thread".

At most C<n> waiting "light threads" will be woken up when the current running
"light thread" later yields (or terminates).

    -- typically, we get the semaphore instance from an upvalue or from
    -- globally shared data. See
    -- https://github.com/openresty/lua-nginx-module#data-sharing-within-an-nginx-worker
    local semaphore = require "ngx.semaphore"
    local sema = semaphore.new()

    sema:post(2)  -- releases 2 resources


=head2 wait

B<syntax:> I<ok, err = sema:wait(timeout)>

B<context:> I<rewrite_by_luaE<42>, access_by_luaE<42>, content_by_luaE<42>, ngx.timer.E<42>>

Requests a resource from the semaphore instance.

Returns C<true> immediately when there is a resource readily available for the
current running "light thread". Otherwise the current "light thread" will
enter the waiting queue and yield execution.

The current "light thread" will be automatically woken up and the C<wait>
function call will return C<true> once a resource becomes available for it, or
it will return C<nil> and a string describing the error in case of failure
(like C<"timeout">).

The C<timeout> argument specifies the maximum time this function call should
wait for (in seconds).

When the C<timeout> argument is C<0>, it means "no wait". That is, when there
is no resource readily available for the current running "light thread", this
C<wait> function call returns C<nil> immediately, together with the error
string C<"timeout">.

You can specify millisecond precision in the timeout value by using floating
point numbers like C<0.001> (which means 1ms).

"Light threads" created in different contexts (like different request
handlers) can wait on the same semaphore instance without problem. See
L</Synchronizing threads in different contexts> for code examples.
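The following sketch illustrates the timeout behavior described above. It is
assumed to run in a C<content_by_lua_block>, and the timeout values used here
are arbitrary:

    local semaphore = require "ngx.semaphore"
    local sema = semaphore.new()  -- starts with 0 resources

    -- a timeout of 0 means "no wait", so this call never yields
    local ok, err = sema:wait(0)
    if not ok then
        ngx.say("no resource yet: ", err)  -- no resource yet: timeout
    end

    sema:post(1)

    -- one resource is now available, so this returns true immediately
    -- instead of waiting for the full 1ms (0.001 second) timeout
    local ok, err = sema:wait(0.001)
    if ok then
        ngx.say("got a resource")
    end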
=head2 count

B<syntax:> I<count = sema:count()>

B<context:> I<init_by_luaE<42>, init_worker_by_luaE<42>, set_by_luaE<42>, rewrite_by_luaE<42>, access_by_luaE<42>, content_by_luaE<42>, header_filter_by_luaE<42>, body_filter_by_luaE<42>, log_by_luaE<42>, ngx.timer.E<42>>

Returns the number of resources readily available in the C<sema> semaphore
instance (if any).

When the returned number is negative, its absolute value is the number of
"light threads" currently waiting on this semaphore.

Consider the following example,

    local semaphore = require "ngx.semaphore"
    local sema = semaphore.new(0)

    ngx.say("count: ", sema:count())  -- count: 0

    local function handler(id)
        local ok, err = sema:wait(1)
        if not ok then
            ngx.say("err: ", err)
        else
            ngx.say("wait success")
        end
    end

    local co1 = ngx.thread.spawn(handler)
    local co2 = ngx.thread.spawn(handler)

    ngx.say("count: ", sema:count())  -- count: -2

    sema:post(1)

    ngx.say("count: ", sema:count())  -- count: -1

    sema:post(2)

    ngx.say("count: ", sema:count())  -- count: 1


=head1 Community


=head2 English Mailing List

The L<openresty-en|https://groups.google.com/group/openresty-en> mailing list
is for English speakers.


=head2 Chinese Mailing List

The L<openresty|https://groups.google.com/group/openresty> mailing list is for
Chinese speakers.


=head1 Bugs and Patches

Please report bugs or submit patches by

=over

=item 1.

creating a ticket on the
L<GitHub Issue Tracker|https://github.com/openresty/lua-resty-core/issues>,

=item 2.

or posting to the L<OpenResty community|/Community>.

=back


=head1 Author

Weixie Cui, Kugou Inc.


=head1 Copyright and License

This module is licensed under the BSD license.

Copyright (C) 2015, by Yichun "agentzh" Zhang, CloudFlare Inc.

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

=over

=item *

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

=back

=over

=item *

Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.

=back

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.


=head1 See Also

=over

=item *

the lua-resty-core library: https://github.com/openresty/lua-resty-core

=item *

the ngx_lua module: https://github.com/openresty/lua-nginx-module

=item *

OpenResty: http://openresty.org

=back