For more details, see: izy-proxy-help-article
You need npm > 3.10.6. The npm install behavior is different in earlier versions. The tool requires that all the node dependencies be installed in a flat node_modules subdirectory.
If you happen to use an older version of npm, the workaround is shown below:
`
mkdir ~/srv/
npm init -f; npm install --save izy-proxy; mkdir -p node_modules/configs/izy-proxy;
`
If you are using npm < 3.10.6, you must also do:
`
pm2 stop 1
pm2 start 2.... update ~/izyware/ ...
`
Note that you should put your containers behind a load balancer (e.g. AWS ELB) to avoid ending up with broken connections.
The compose tool allows defining and managing multi-service applications. You use a compose JSON file to configure all the nano services and then you can create, configure, start and stop the nano services from your application.
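To make the lifecycle described above concrete, here is a plain-JavaScript sketch. Note that the compose field names (`module`, `config`) and the manager API are illustrative assumptions, not the actual izy-proxy compose schema:

```javascript
// Hypothetical compose definition; the real izy-proxy compose JSON
// schema may differ -- field names here are illustrative only.
const serviceCompose = {
  shell: { module: '//service/shell', config: { verbose: true } },
  audiooutput: { module: '//service/audiooutput', config: {} }
};

// Minimal lifecycle manager sketch: create, start, and stop services by
// the name they have in the compose file.
function createComposeManager(compose) {
  const states = {};
  for (const name of Object.keys(compose)) states[name] = 'created';
  return {
    start(name) { states[name] = 'started'; },
    stop(name) { states[name] = 'stopped'; },
    state(name) { return states[name]; }
  };
}

const manager = createComposeManager(serviceCompose);
manager.start('shell');
```

The key idea mirrored here is that the compose file is the single source of truth for service identity, and lifecycle transitions are driven by name.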
The service lifecycle can be managed by
service?start
This is the equivalent of
['//inline/service?compose', queryObject.serviceComposeId],
[`//service/${queryObject.service}?onConfig`]
For better control, a mini services architecture will use the following:
User interface component will setup the service compose and subscribe to the services it is interested in:
['//inline/service?compose', serviceCompose],
['service.subscribeTo', 'shell'],...
modtask.onservice = function(queryObject) {
  const { serviceName, notification } = queryObject;
  // ...
};
After the user enters the values and picks a config, the user interface component will call the service:
[`//service/shell?onConfig`, { cmd, verbose: true }]
On the service side, when the interface is called, the framework will inject datastreamMonitor and make the service instance singleton available to the service implementation:
modtask.onConfig = function(queryObject, cb, context) {
  const { datastreamMonitor } = modtask;
  const { service, monitoringConfig } = context;
  ...
  datastreamMonitor.log({ msg: { action: 'updateConfig', updates } });
The service can then notify the subscribers by:
chain(['//inline/service?notifySubscribers', {
  source: modtask.myname,
  notification: {
    id: 'speakerAudioContextIsReadyNotification'
  }
}]);

The event management layer allows communication across nano services using a publish/subscribe pattern, which leads to great flexibility and scalability.
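The publish/subscribe flow above can be sketched in plain JavaScript. This illustrates the pattern only, not the izy-proxy internals; the queryObject shape (`serviceName`, `notification`) mirrors the onservice handler shown earlier:

```javascript
// Minimal publish/subscribe sketch: subscribers register by source name
// and receive notification objects, mirroring subscribeTo/notifySubscribers.
const subscriptions = {}; // source name -> list of handler callbacks

function subscribeTo(source, handler) {
  (subscriptions[source] = subscriptions[source] || []).push(handler);
}

function notifySubscribers(source, notification) {
  for (const handler of subscriptions[source] || []) {
    handler({ serviceName: source, notification });
  }
}

// A subscriber interested in the 'shell' service:
const received = [];
subscribeTo('shell', queryObject => received.push(queryObject.notification.id));

// The service publishing an event:
notifySubscribers('shell', { id: 'speakerAudioContextIsReadyNotification' });
```

Because publishers never reference subscribers directly, services can be added or removed without touching each other's code, which is where the flexibility and scalability come from.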
The monitoring layer allows monitoring and logging activity for the services. It provides useful features for streaming services that can measure streaming parameters (throughput, frequency, etc.). datastreamMonitor.log will be the primary interface.
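As a sketch of what such a streaming monitor measures, the following counts log events and payload bytes so that frequency and throughput can be derived. The real datastreamMonitor interface is richer; this factory and its method names are illustrative assumptions:

```javascript
// Illustrative stream monitor: counts log events and bytes so that
// frequency (events/sec) and throughput (bytes/sec) can be derived.
function createStreamMonitor(nowFn = Date.now) {
  const startedAt = nowFn();
  let events = 0;
  let bytes = 0;
  return {
    log(entry) {
      events += 1;
      bytes += JSON.stringify(entry.msg || {}).length;
    },
    stats() {
      const elapsedSec = Math.max((nowFn() - startedAt) / 1000, 1e-9);
      return { events, bytes, eventsPerSec: events / elapsedSec };
    }
  };
}

const monitor = createStreamMonitor();
monitor.log({ msg: { action: 'updateConfig' } });
monitor.log({ msg: { action: 'packet', size: 4096 } });
```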
For a reference implementation, refer to the portforwarding sample in the apps directory and the white paper available in your enterprise dashboard. You may also refer to the open source tools available in automation-desktop. For detailed architectural considerations and metrics schema definition and customization, refer to Unified Metrics Stream Pipeline in the help docs.
`
cp node_modules/izy-proxy/samples/tcpserverproductionconfig.js node_modules/configs/izy-proxy/tcpserver.js
`
Note that some configurations may require additional local packages. For example, pkgloader/dbnode depends on components/pkgman/dbnode being locally present. Make sure to include the relevant components locally and add a search path reference under modtask/config/kernel/extstores/file.js to the appropriate location.
To ensure that the package import configuration is set up correctly for the service and the credentials are valid, try importing the service handler package from the cli, using your config file:
`
node cli.js method chain chain.action "//inline/myservice:handler" chain.queryObject.success true chain.relConfigFile ../configs/izy-proxy/tcpserver
`
To work around this problem, consider manipulating moduleSearchPaths:

process.izyProxyCliModuleSearchPaths

modtask.Kernel.rootModule.usermodule.moduleSearchPaths = [
  'Path1',
  'Path2'
].concat(modtask.Kernel.rootModule.usermodule.moduleSearchPaths);

Notice that for the legacy build system using the ./ldo.sh command syntax, the search paths may be customized inside thirdparty/config/kernel/extstores/file.js.
`
cd node_modules/izy-proxy
node tcpserver/app.js (or if you are using pm2, do pm2 start tcpserver/app.js)
`
Make sure the cwd for the server process is set to the location of the izy-proxy installation. This is important because the cwd is used in locating plugins, thirdparty modules, and the configuration.
For testing your deployment, you can override the default config options by launching the component in interactive mode:
`
node cli.js method tcpserver chainProcessorConfig.runpkg.verbose true
`
After the server is running, the following should work:
`
{
  "host": "localhost:20080",
  "url": "/izyproxystatus",
  "subsystem": "server"
}
`
The serverObjs variable allows the plug-in modules to handle CORS. Note that to handle CORS a plug-in must:
You should customize the Access-Control-Allow-XXX headers for your own business needs.
Make sure to edit ../configs/izy-proxy/taskrunner.js. At a minimum the following values need to be set:
`
node taskrunner/app.js (or if you are using pm2, do pm2 start taskrunner/app.js)
`
For testing your deployment, you can override the default config options by launching the taskrunner in interactive mode:
`
node cli.js method taskrunner taskrunnerProcessor.verbose true taskrunnerProcessor.readonly true taskrunnerProcessor.loopMode false
`
See the configuration reference for taskrunner for the list of command line options.
[taskParameters]
where taskParameters is defined in the Izy Cloud Dashboard. This will allow for flexibility in terms of what/where to run the tasks:
To run mypackage:mymodule as a JSONIO API Interface (what), inline (transport), inside the context of the izy-proxy process (where), simply set the taskParameters to:
///mypackage:mymodule
To run mypackage:mymodule as a JSONIO API Interface (what), over HTTPS (transport), inside the context of myserver.com (where), simply set the taskParameters to:
//myserver.com/mypackage:mymodule?method
To run mypackage:mymodule as a chain (what), inline (transport), inside the context of the izy-proxy process (where), simply set the taskParameters to:
//inline/mypackage:mymodule?method
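The what/transport/where split in these addresses can be made explicit with a small parser. This helper is hypothetical (not part of izy-proxy) and only illustrates how the `//host/pkg:module?method` shape decomposes:

```javascript
// Hypothetical helper illustrating the what/transport/where decomposition
// of a task address such as '//inline/mypackage:mymodule?method'.
function parseTaskAddress(address) {
  const match = /^\/\/([^/]*)\/([^:?]*):([^?]*)(?:\?(.*))?$/.exec(address);
  if (!match) return null;
  const [, host, pkg, mod, method] = match;
  // An empty or 'inline' host means the call runs inside the izy-proxy process.
  const inline = host === 'inline' || host === '';
  return {
    where: inline ? 'izy-proxy process' : host,
    transport: inline ? 'inline' : 'https',
    pkg,
    module: mod,
    method: method || null
  };
}
```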
We recommend that you enable clustering for better reliability for your mission critical apps.
You may enable clustering by either doing:
`
`
If you specify the cluster configuration in your config file, the default startup mode would automatically become 'master'.
`
{
  healthCheckIntervalSeconds: 2,
  slaveMaxAllowedMemoryMB: 400,
  slaveTTLSeconds: 1000,
  verbose: {
    healthCheck: true,
    masterSlaveMessages: false,
    restartSlave: true
  }
}
`
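The memory and TTL limits above imply a per-slave restart policy checked on each health-check interval. A sketch of that decision logic (hypothetical; the actual master implementation is internal to izy-proxy):

```javascript
// Cluster config values as shown above.
const clusterConfig = {
  healthCheckIntervalSeconds: 2,
  slaveMaxAllowedMemoryMB: 400,
  slaveTTLSeconds: 1000
};

// Hypothetical restart decision: restart a slave when it exceeds the
// memory ceiling or has outlived its TTL.
function shouldRestartSlave(slave, config, nowSeconds) {
  if (slave.memoryMB > config.slaveMaxAllowedMemoryMB) return true;
  if (nowSeconds - slave.startedAtSeconds > config.slaveTTLSeconds) return true;
  return false;
}
```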
`
`
Test chains, runpkg and the API plug-in -- these require a localhost connection.
`
npm run test all
`
`
/* memory management tests */
chrome://inspect
node --inspect tcpserver/app.js standalone
node cli.js call "test/performance/memtest?test"

/* running the socket handler module directly without going through the TCP/IP stack */
node cli.js method socket socket.path izy-pop3/proxypkg:directdb socket.testmod izy-pop3/proxypkg/test/android socket.verbose.pipe.s1data true socket.verbose.pipe.s2data true socket.verbose.pipe.teardown true socket.verbose.mock false socket.user user socket.pass 'password!'

/* Test remote servers over TCP/SSL */
node cli.js method socket socket.path pop3.izyware.com socket.port 110 socket.testmod izy-pop3/proxypkg/test/android socket.verbose.pipe.s1data true socket.verbose.pipe.s2data true socket.verbose.pipe.teardown true socket.verbose.mock false socket.user user socket.pass 'password!'
`
`
`
You can use the mock libraries in conjunction with the data library to improve your test coverage.
['chain.importProcessor', 'izy-proxy/test/assert:chain'],
['net.httprequest', { url: 'https://myservice/endpoint' }],
['assert.value', {
  /* optional */
  verbose: {
    testCondition: true,
    testeeObj: true
  },
  /* optional */
  contextName: 'Provide the explanation and context when an assertion failure is reported',
  /* optional */
  operators: {
    success: 'equal',
    status: 'equal'
    // reason: 'contain'
    // counter: 'greater than'
    // str2: 'notcontain'
  },
  success: true,
  status: 200
}],
The system will deserialize the api.queryObject.* parameters into a JSON queryObject that gets passed into the JSONIO api handlers.
//inline/izy-proxy/test:cloudwatch/base
`
node cli.js call [//cloud/]
node cli.js method taskrunner ...
node cli.js method chain ...
`
Of all the methods, the chain is the most powerful, because it allows you to run any chain command (remote, local, etc.) while composing an arbitrary JSON queryObject.
`
will show
{ success: 'true', testKey: 'testValue' }
`
Of course, you could use it for remote connections, e.g.:
`
chain.queryObject.email xxx@yourizywaredomain.com
`
If your subscription enables access to Automation Projects studio, you can recreate these launch configurations in the UI and use the console for interactive exploration of your app functionality. Logging, monitoring and automation will also be supported. You must select the authentication strategy that best suits your use case (Open, oAuth, SAML, etc.) even in the minimal service configuration option. For example:
require('izy-proxy/server').run({
  port: { http: 17800 },
  plugins: [{
    name: 'apigateway',
    moduleSearchPaths: [__dirname + '/'],
    cli: 'oAuth2'
  }]
});
Use flatToJSON to convert flat serialized command line strings to in-memory JSON key/values using the standard modtask flatToJSON method. If you need to parse the string values into objects, you should use expandStringEncodedConfigValues:

let cmdline = { 'queryObject.param.key1': 'val', test: 'json:["domain_manager"]' };
const { flatToJSON, expandStringEncodedConfigValues } = require('izy-proxy/izymodtask');
expandStringEncodedConfigValues(flatToJSON(cmdline));
/* will result in */
{ success: true, data: { queryObject: { param: { key1: 'val' } }, test: ['domain_manager'] } }
The serialized syntax for expandStringEncodedConfigValues is consistent with data URIs as defined in RFC 2397.
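To make the two transformations concrete, here is a minimal sketch. These `...Sketch` functions are not the izy-proxy implementation (which wraps its result in a `{ success, data }` outcome object); they only reproduce the dotted-key expansion and the `json:`-prefix parsing:

```javascript
// Sketch of flatToJSON: expand dotted keys into nested objects.
function flatToJSONSketch(flat) {
  const data = {};
  for (const [path, value] of Object.entries(flat)) {
    const parts = path.split('.');
    let node = data;
    for (const part of parts.slice(0, -1)) {
      node = node[part] = node[part] || {};
    }
    node[parts[parts.length - 1]] = value;
  }
  return data;
}

// Sketch of expandStringEncodedConfigValues: parse 'json:'-prefixed strings
// into objects, similar in spirit to RFC 2397 data URIs.
function expandSketch(node) {
  for (const [key, value] of Object.entries(node)) {
    if (typeof value === 'string' && value.startsWith('json:')) {
      node[key] = JSON.parse(value.slice('json:'.length));
    } else if (value && typeof value === 'object') {
      expandSketch(value);
    }
  }
  return node;
}

const result = expandSketch(flatToJSONSketch({
  'queryObject.param.key1': 'val',
  test: 'json:["domain_manager"]'
}));
```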
Create a plugin by cloning the default subfolder under the plugin directory. You must also register the plugin in the config file.
To automatically test the plug-ins below, send a GET request and expect a 200 status code.
The plug-ins may use the sessionObjects to share state and parsed information across the canHandle and handle stages. This allows for writing high performance plug-ins.
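A sketch of that pattern: parse once in canHandle, stash the result in sessionObjects, and reuse it in handle so the work is not repeated. The plugin shape and the canHandle/handle signatures here are hypothetical assumptions, not the exact izy-proxy plug-in interface:

```javascript
// Hypothetical plugin sketch: canHandle parses the request once and caches
// the result in sessionObjects so handle does not re-parse.
const plugin = {
  canHandle(serverObjs, sessionObjects) {
    const url = serverObjs.req.url;
    if (!url.startsWith('/izyproxystatus')) return false;
    sessionObjects.parsed = { subsystem: 'server', url }; // stash parsed state
    return true;
  },
  handle(serverObjs, sessionObjects) {
    // Reuses the state parsed during canHandle.
    return { success: true, data: sessionObjects.parsed };
  }
};

const sessionObjects = {};
const serverObjs = { req: { url: '/izyproxystatus' } };
const accepted = plugin.canHandle(serverObjs, sessionObjects);
const outcome = accepted ? plugin.handle(serverObjs, sessionObjects) : null;
```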
To test this plug-in, try:
`
`
`
`
There is an optional plug-in called logging. If you would like to remotely view the internal server logs (or view the entries from your dashboard), you must enable it in the config:
`
{
  name: 'logging',
  reloadPerRequest: false,
  ipwhitelist: ['1.2.3.4']
}
`
Due to security requirements, you must whitelist the IP address in order to access the logging feature.
While it's possible to do ['chain.importProcessor', 'lib/monitoring', {}] inside each handler, it is recommended to define the monitoring and logging strategy per service instance. Notice the following component lifecycles:

Using the strategy above, the service entrypoints should be launched using a newChain, and the methodCallContextObjectsProvidedByChain and monitoringConfig need to be specified. This will guarantee that the proper objects are instantiated and made available within the context of each method call during the service lifetime. The service instantiation cli uses this technique to set up the proper logging per service end-point. See the service code samples sections for examples.
`
plugins: [
  {
    name: 'socket',
    customRequestHandler: true,
    items: [
      { port: 20110, handlerPath: ':test/socket' }
    ],
    chainProcessorConfig: chainProcessorConfig
  }
]
}
`
See the testing instructions above (under Testing) for how to test the service handler directly from the command line. The following verbose flags (and their default values) are available:
`
writes: false,
ondata: false,
connect: false,
terminate: true,
error: true,
close: false,
end: false,
datacopy: false,
detectStandardOK: false,
authenticate: false,
mock: false
`
As the first step, setting the enableQOSMetrics flag to true for the streamproto1 object will trigger a call to the qosWriter method every time an audio packet is received. A QOSMetrics object will be generated and passed on to the qosWriter method. The object will have the following properties:
A typical implementation may be found in service/audiooutput:

(xcast) audio source => IzySocketWriterNode => metadata packet (getMetaDataStrFunction) + audio packet => (peer) => IzySocketReaderNode => audio sink
(xcast) updateQOSForUser <= socket.on('data') <= Stringify(QOSMetrics) (peer) <= enableQOS ? qosWriter

qosWriter will take the QOSMetrics and write it back to the xcast QOSSocketWriterData storeLib variable. Eventually, service/mixeradmin will push the QOSSocketWriterData back to the admin using socketWriterNode.getMetaDataStrFunction: it picks up QOSSocketWriterData alongside other xcast state objects and serializes them into the storeLib.set('readMetadataPacket') value, which is then consumed by dashboard/userinput/qos/api to populate the QOS dashboard.

If you wish to access izy modules from the file system, you may customize the module path resolution rulesets defined in:
`
`
Please refer to the comments in the file to understand how to reference external modules.
Your application units of functionality may be componentized either as a chain action (CA) or as a JSONIO end-point. The former enforces execution of your application code in context while sharing the objects passed in and out, while the latter serializes the objects and executes your application code in an isolated environment. The following considerations are important when making a decision about the component type:
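The practical difference can be made concrete in plain JavaScript: an in-context CA shares object references (mutations are visible to the caller), while a JSONIO boundary serializes, so methods and object identity do not survive. This is an illustration of the general trade-off, not izy-proxy code:

```javascript
// In-context (CA-like): the callee receives the same object reference.
function callInContext(handler, obj) { return handler(obj); }

// Isolated (JSONIO-like): the object crosses a serialization boundary,
// so the callee works on a deep copy and cannot mutate the caller's state.
function callSerialized(handler, obj) {
  return handler(JSON.parse(JSON.stringify(obj)));
}

const shared = { counter: 0, bump() { this.counter++; } };

callInContext(o => { o.counter = 5; }, shared);
// shared.counter is now 5: in-context mutations are visible to the caller.

const copy = JSON.parse(JSON.stringify(shared));
// copy.bump is undefined: methods do not survive serialization.
```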
Note that when setting up a new chain, special attention must be paid to chainAttachedModule and the context object. The context object gets accessed when doing $chain.get and $chain.set. In some settings (for example FE components) it may make sense to set both the context and chainAttachedModule to the same object (i.e. modui), while in other contexts it does not.
CAs are implemented via a chain handler module (CHM) while JSONIO end-points are implemented as a JSONIO module (JM). JMs always proffer the doChain method with its context object set to the JM. On the other hand, CHMs do not have the standard doChain processor as JSONIO modules do, because they can share context and be called from other chains. In order to process a chain inside a CHM, the $chain parameter proffers the following functions:
['//service/endpoint', { data }]
);
$chain.newChainForProcessor(processorModule, next, {}, [
  ['//service/endpoint', { data }]
]);
To run chain actions, several options are available:

* modtask.doChain is enabled inside the current context module: this is the most common usecase for UI components. The advantage of using this option is that any non-success outcome will be captured by the modtask, which can be routed to the UI error div. This allows implementing error handling at the component level (not the application level) without writing extra code. It is available inside async functions as well, but its usage there is discouraged. If your service monitoring config has warnAsyncDoChainUsage enabled, you will receive a warning for using it.
* The ['newChain', ...] CA can be used with modtask.doChain.
* const run = require('izy-proxy/asyncio')(modtask|module).run: this option will only run a single CA (no chain arrays). The non-success outcome will be thrown (not captured by modtask). When using module, the relative paths for CA references will not work.
* modtask.newChainAsync: similar to modtask.doChain, except that the non-success outcome will be thrown. Note that this option is only available inside async functions.
['net.httprequest', {
  verbose: {
    logRequest: true,
    logResponse: true,
    logEvents: true
  },
  url: '',
  method: 'POST',
  headers: {},
  body: 'string',
  /* when set to false, it will allow https connections to self-signed certificates */
  rejectUnauthorized: false,
  /* when an error would have triggered an unsuccessful outcome, setting this will return a success outcome with outcome.status set to myNumericalValue and responseText set to outcome.reason */
  resolveErrorAsStatusCode: 'myNumericalValue',
  /* optional */
  responseType: 'text',
  /* optional: if you already have a tcp or web socket that you would like to use for multiplexing, use requestOverExistingConnection by providing the connectionId */
  connectionId
}]
Notice that when setting up the runpkg chain processor, one of the configuration parameters is sessionMod, which will be used to carry the authorization info when packages are run:

* When calls use the inline signature, { ownerId, ownerType } is 'trusted' and used from the sessionMod.
* When calls are over http (even in a local environment), the authorizationToken will be used from the sessionMod as the bearer token for the HTTP authorization method.

The difference between the context object for non-JSONIO calls (i.e. the plugin/http/handle newChain method) and the context object that gets constructed inside the chain for JSONIO //inline/ or //cloud/ calls can sometimes be confusing. It is worth remembering that non-JSONIO calls (i.e. the http handler) do not have built-in authentication schemas and the implementation is up to the user.
We will review the legacy and current implementations and call signatures below
The following schema is supported
///pkg:module?method&forcepackagereload=1&methodnotfoundstatus=statuscode
//service/pkg:path?method
To maintain backward compatibility with V2, the following procedure is followed:
function(queryObject, cb, context)
context: {
  queryObject: queryObject,
  context: context
}

The values can easily be accessed via $chain.get('queryObject'), ...
`
modtask.apiInterfaceType = 'jsonio';
modtask.processQueries = function(queryObject, cb, context) {
  modtask.doChain([
    ['nop'],
    ['frame_getnode', modtask],
    ['frame_importpkgs', ['sql/jsonnode']],
    ['frame_importpkgs', ['ui/node']],
    ['frame_importpkgs', ['encodedecode']],
    function(_push) {
      ...
    }
`
This can be consumed from anywhere in the cloud via chaining, i.e:
`
function(_push) {
  console.log(modtask.outcome);
}
`
localhost.izyware.com 127.0.0.1
intranetservice1.izyware.com 10.0.0.231
...
../configs/izy-proxy/certificates/privatekey.pem
{ handlerMod: 'apps/bridge/...', domain: 'mydomain', config: {
  transportAgent: 'socks5://ip:port',
  verbose: {
    transportAgent: true
  }
}}
To make cloning and reusability easier, it is further recommended that the bridge functionality be implemented as a subpackage using the following scheme:
app_name/bridge/api
For example, the http bridge app included with izy-proxy uses:
apps/bridge/api.js
This would result in the route:
https://dnsentry.com/apigateway/%3Aapps/bridge%3Aapi
Which can then be further used to perform configuration tasks from the CLI, Dashboard Interface or more generally a JSON API call:
npm run apps/bridge/api?setconfig queryObject.agent socks5://ip:port
izy-proxy supports both a CLI (command line interface) and a referencable library. There are a few typical usecases for using those.
You launch your components by:
`
`
If you would rather use npm run, add this to the scripts section of your package.json file:
`
`
This way, you can just call the action by doing:
`
`
If you need to make these available from the terminal from anywhere, place the call clause inside a separate js file and link it to the package.json bin property. You can use the link command (or install -g), which will symlink the scripts under the bin section into the prefix directory:
npm prefix -g
Create a file as the global entry point to your package:
chmod +x bin/izy.js
And finally reference it in the bin section of package.json. It can be used by npm install or by linking it:
npm link
If npm link does not work, you can manually link:

ln -s `npm prefix -g`/bin/izy.xxx `pwd`/bin/izy.js
cd myApp; npm run theAction ...

You could do this from anywhere:

izy.myApp theAction ...
See the samples folder for more information.
Use loadById for the source data schema:

const proxyLib = require('izy-proxy').basePath;
// json
chain([`//inline/${proxyLib}/json?loadById`, { id }]);
// yaml
chain([`//inline/${proxyLib}/yaml?loadById`, { id }]);
// xml
chain([`//inline/${proxyLib}/xml?loadById`, { id }]);
You can consume these in chains by calling the format modules directly:

const proxyLib = require('izy-proxy').basePath;
chain([`//inline/${proxyLib}/format?serialize`, {
  data,
  format: 'tsv'
}]);
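To make the expected output of a 'tsv' serialization concrete, here is a simple sketch for an array of flat records. The real format module may order columns and escape values differently; this function is illustrative only:

```javascript
// Sketch: serialize an array of flat objects to TSV, header row first.
// Column order is taken from the first record.
function serializeTSV(rows) {
  if (!rows.length) return '';
  const columns = Object.keys(rows[0]);
  const lines = [columns.join('\t')];
  for (const row of rows) {
    lines.push(columns.map(c => String(row[c])).join('\t'));
  }
  return lines.join('\n');
}

const tsv = serializeTSV([
  { name: 'shell', port: 20110 },
  { name: 'http', port: 20080 }
]);
```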
`
cp ./node_modules/izy-proxy/modtask/config/kernel/extstores/file.js modtask/config/kernel/extstores/
`
You should then edit the modtask/config/kernel/extstores/file.js and add the hard coded and relative paths for the location of your repositories.
You can also use the following chain command for adding module search paths:
`
modtask.doChain([
  ['chain.moduleSearchPathAdd', repoRoot],
  chain => console.log(chain.get('outcome'))
]);
`
In this case, call the newChain method:
`
require('izy-proxy').newChain({
  chainItems: [
    ['chain.importProcessor', 'lib/chain/sql', {
      config: {}
    }],
    ['//inline/module?fn', {}]
  ],
  callerContextModule: module
}, outcome => console.log(outcome.reason));

/* or more compact */
require('izy-proxy')(module).series([
  []
], failoutcome => {});
`
To provide easier interoperability with newer versions of Python that support Asynchronous I/O you can use the async version. See python-asyncio for details:
`
let { status } = await run('net.httprequest', {
  url: 'http://bad.url.o.r.g',
  resolveErrorAsStatusCode: -1
});
console.log(status);
`
`
require('izy-proxy/server').run({
  verbose: {
    WARNING: true,
    INFO: true,
    ERROR: true
  },
  port: {
    http: 20080
  },
  plugins: [{
    name: 'http',
    domains: [
      { handlerMod: 'mypath/myhandler', domain: 'testdomain.com' }
    ],
    chainProcessorConfig: {
      import: {
        pkgloadermodname: 'samples/pkgloader/izycloud'
      }
    }
  }]
});
`
app object and defines routing by the app.METHOD(PATH, HANDLER) syntax.
const moduleSeachPaths = null; /* optional, defaults to [`${__dirname}/`] */
app.all('/path/to/module', izyhandle);

const moduleSeachPaths = null; /* optional, defaults to [`${__dirname}/`] */
server.route({
  method: 'GET',
  path: '/',
  handler: izyhandle
});
AWS > Lambda > Runtime Settings > Handler
node_modules/izy-proxy/frameworks/aws/lambda.handler
Set the CLI string under AWS > Lambda > Configuration > Environment variables as AWSLAMBDACLI_STRING. For example, if you are using the following to launch your component from the CLI:

npm run lib/mytest?mytask queryObject.p1 hello

then create the following environment variable:

AWSLAMBDACLI_STRING=lib/mytest?mytask queryObject.p1 hello
For more details, visit izyware.
* an optional improvement would be to use /tmp folder cache to generate the function to skip the first raw evaluation
* this will enable distributing the pkg server across a cluster.
* make 2 versions and 1 for V2?

node node_modules/izy-proxy/cli.js callpretty test/test
/* fails -- needs to work. works when called from izy-proxy root */
['chain.importProcessor', 'izy-proxy/test/assert:chain'],
/* works */
['chain.importProcessor', 'test/assert:chain'],
* this scenario can happen for example when calling directly from a node module
require('izy-proxy').newChain({
  chainItems: [
    ['//inline/?MyFN'],
    chain => {}
  ],

['chain.importProcessor', 'izywaretoolbar/5/extension/api:chainprocessor']

will become

['chain.importProcessor', 'rel:../extension/api:chainprocessor']
* some of the sql access, session management for legacy systems gets routed
* the session management relies on some of the data structure in legacy /apps/izyware/index
* net.http needs to allow for passing a transport agent
* support HTTP, HTTPS, and SOCKS options
* for browser context require toolbar.
* domains need to become part of the context
* APIs is not domain aware. need to add that
* perhaps after we have added domain to context, add a config option to the api-plugin
* the API implementor would still be able to do this but might be simpler to filter at the service level
* pass the IP address and headers metadata to the { sessionObjs } already present in the HTTP handler as well as the RAW APIs
* does it make sense to consolidate the model for RAW APIs and HTTP handlers?
* note that in the node environment, req.connection.remoteAddress returns 127.0.0.1 if you are behind a proxy
* for customers using an nginx proxy, utilize request.headers['X-Forwarded-For'] to pass this down
* Is this a valid point? maybe not: this information will NOT be available in JSON/IO apis. The point of JSON/IO apis is transport independence, and doing this will introduce transport specific concepts into it.
/* only happens when called from frameworks */
req.path = '/path/to/mymodule';
require('../../index').newChain({
  chainItems: [
    ['//inline/' + req.path, (req.method.toUpperCase() === 'POST' ? req.body : req.query)],
  ],
  chainProcessorConfig: {
    moduleSearchPaths
  }
}, outcome => {
  delete outcome.callstack;
  delete outcome.callstackStr;
  res(JSON.stringify(outcome));
});
'socket://{connectionId}/rel:peer?handshake'

apps/accountmanager/5/sessionfeature:chainprocessor implements locally:

['chain.importProcessor', 'apps/accountmanager/5/sessionfeature:chainprocessor']
['features=session.pkgflags']

* turns out the failure is coming from the modtask.ldmod('kernel\\selectors').objectExist line in runpkg
* use datastreamMonitor for monitoring
* similar to OS level support for services: changeServiceConfig (windows), reload (unix)
* similar pattern to docker compose for compose
* This will start the service, if not started, on first interaction
* the service "uniqueness" is resolved by the name it has in the compose file
* support for observables to the service context
* customers have requested that logging tools for streams be provided at the base server level
* useful for granular logging control and live streaming properties (data frequency, QOS) monitoring
* service reference schema now supporting [https:]//localhost[:port]/${path}/${action}

//localhost/${path}/${action}
* returns null when malformed json is present
* customer feedback needed to support modern XMLHttpRequest options
* allows making https requests to end-points with self-signed certificates
* useful for testing
* this will enable handling errors gracefully for event driven apps without throwing the main chain.
* This will be more consistent with the package launcher // schema.
* This is important to avoid collision where the package is being launched when installed as a dependency inside node_modules and the parent context also has a chain module with the same name:

/* ambiguous -- chain could be found in a lot of different places */
['chain.importProcessor', 'chain']
/* unambiguous -- chain must be in the same path as the current module; would be like modtask.ldmod('kernel/path').rel('chain') */
['chain.importProcessor', 'rel:chain']

* customers had reported module path resolution conflicts for rel:build with internal build components.
* this will support a more compact syntax
* concurrent threads might be able to get access to execution contexts when a common module address space is used
* enables object construction delegation to the module load phase
* this will allow module consumption via require and ldmod
* better performance
* added package running space isolation
* when id is an object it will parse and verify
* when id is a service address, the service gets contacted.
* consume moduleSearchPaths to locate mod?fn relative to app.js (see below)
* app.js below

curl --header "Content-Type: application/json" -X POST "http://localhost:17800/apigateway/mod?fn" -d '{}'
require('izy-proxy/server').run({
  port: { http: 17800 },
  plugins: [{
    name: 'apigateway',
    moduleSearchPaths: [__dirname + '/']
  }]
});
* Decouples the hosting environment from the apps/tasks/_ packages
* add http chain net.httprequest
* now supporting ///pkg:module?method&forcepackagereload=1&methodnotfoundstatus=statuscode
* allow probing for existence of methods
* the package names are used extensively for access control and customization. therefore we cannot assign arbitrary names to the packages
* this scheme is consistent with the call signature for jsonio calls

node cli.js method chain chain.action '//../...' chain.queryObject.provisioningConfig.type json:[\"domain_manager\"] chain.queryObject.provisioningConfig.userids json:[\"@userlastinsertid\"]

* standard signature for handling errors, non 200s, etc.
* single lib working in all scripting environments
* This will work in environments that use older encodings on some areas of the stack (i.e. the HTTP server or Scripting)
* removes the dependency to ui/node/ libraries
* added tests to ensure that parameter types are being serialized and deserialized correctly across context boundaries
* used to fail before even trying to check if modules were present locally
* needed in settings where making doChain available for pkgruns (example rawhttp APIs, etc.)
* This will make it harder for customers to create cloud components that can be deployed reliably to different cloud environments.
* updated the API system to use the same parser for uri that runpkg offers
* moved the rawhttp, mod.handle interface with chain enabled into its own module within the api plug-in
* made the certificate paths optional
* this will allow non connected apps to work locally
* when auth is not defined, the cloud import will not hit the izycloud
* this will force a chain.deportPkgs before importpkgs
* it will allow flexibility for customizing how the remote tasks can be updated
* it will work well with importpkgs and the caching layer there
* INLINESTORE, Minicore, Kernel is scoped per rootmode (thus so will ldonce)
* the importCache is scoped per import processor
* using socketIds to communicate between the processor and clients. this should allow for embedded socket applications
* added socket.mock and socket.pipe features
* ditch doChain in favor of newChainForProcessor
* newChainForProcessor: while handling an item in a $chain, will run a new chain in a new context, and on failure will exit the $chain. On success it will do next (move on to the next item in the $chain). The new running context will be tied to the processor module and the resulting callstack in case of a failure will trace to the processor.
* newChainForModule: runs a chain in a new running context tied to the given module and context object. The resulting callstack in case of a failure will trace to the module. The callback will have to decide whether to exit the $chain on failure or just report it as an output.
* this would 'share' the running context (callbacks and context) across $chain and the new chain, which will lead to maintenance issues
* useful for calling internal methods in a module from a chain without writing code
* the '//inline/' call pattern would either call processQueries or run the entire module as a chain
* '//inline/?actionName' will run modtask.actions.actionName(...)
* the 'queryObject' and 'context' will be passed as chain keys
* unified running JSONIO type modules with chains enabled by adding runJSONIOModuleInlineWithChainFeature to pkg/run
* all other subsystems, including the APIs, should be using this
* //chain/ was sharing context across chains which can lead to problems
* if the system level is successful, $chain.get('outcome') will be reported
* without this it would require everyone to add a ['return'] to the end of every chain or add ['ROF'] after every call to pump the outcome to the CB, which is annoying. It will also make error reporting less useful because the call stack will be at ['ROF'] or ['return'] instead of the previous expression that caused it.
* this will provide valuable information when reporting failures and constructing stack traces
* this will ensure that the execution contexts are separate and the data will not be leaked or corrupted
* will allow for running packages in parallel without the fear of packages stepping on each other's data

Error Handling:
* locating the error is typically the hardest thing to do, which this feature will help accomplish
* error reporting should give back modules, and where in the modules, and the path (stack trace) that got things there
* types of errors:
* syntax error inside a dynamic function
* dynamic/static chain calling invalid commands, etc.
* source code errors when loading
['//izyware.com/rel:modname', {}, mod]
* The token might be used for making HTTP based (not inline) calls
* Updated the OPTIONS response to indicate that authorization headers are accepted when making cross domain calls.
* this will enable an automated task for doing tests on the live service from the IzyCloud enterprise dashboard
* The following plug-ins are covered:
* APIs
* http
* circus
* consolidated from toolbar and other libraries
* If this is not turned on, the module updates will only be picked up on task restarts, which is not desirable.
* added apiExecutionContext to taskrunnerProcessor config for remote and local configurations and deployments.

ui/ide:cloudstorage/api with auth token to do package loading inside the Chains (i.e. when using the taskrunner)
* this is important for processing data sets:

['newChain', {
  chainName: modtask.myname + '.loop',
  chainItems: [
    function(chain) {
      if (offset >= maxOffset) {
        return chain([
          ['log', 'ran ' + maxOffset + ' sets of queries'],
          ['set', 'lastOutcome', {
            data: queryResults,
            success: true
          }],
          ['return']
        ]);
      }
      chain([
        ['log', (offset+1) + ' of ' + maxOffset]
      ]);
    },
    ['ROF'],
    ['replay']
  ]
}]
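The ROF/replay pattern above is effectively a batched loop: process one offset per iteration, replay until maxOffset is reached, then report the accumulated outcome. In plain JavaScript the equivalent control flow looks like this (a sketch of the pattern, not the chain engine):

```javascript
// Plain-JS equivalent of the replay loop: process one batch per
// "replay" iteration until maxOffset, then report the accumulated outcome.
function runBatches(maxOffset, processBatch) {
  const queryResults = [];
  let offset = 0;
  while (offset < maxOffset) {
    queryResults.push(processBatch(offset)); // one replay iteration
    offset += 1;
  }
  // Mirrors the ['set', 'lastOutcome', ...] / ['return'] step above.
  return { success: true, data: queryResults, ran: maxOffset };
}

const outcome = runBatches(3, offset => 'batch-' + offset);
```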
* process seqs.onNewTask and capture the outcome if error without CONTAMINATING itself
* The task itself will have a chain seqs.onNewTask (see izy-proxy/taskrunner/main.js)
* pre import relevant processors (task.*)
* This will allow us to have nested chains, and subchains, etc.
* being able to doChain internally is important because the chain handlers may need to utilize other commands inside the current context.
* having newChain is also important to be able to isolate a set of commands (for example the task manager running an external module).
* virtually, doChain/newChain will allow differentiation on the following:
* will it have a new context or will it share the context with the parent?
* will it have an outcome handler or will it go to the parent?
* this will open up implementation by chain.
* if a handler is redeclared, it will process instead of the parent context

automation-desktop: https://github.com/izyware/automation-desktop
python-asyncio: https://docs.python.org/3/library/asyncio.html
npmjs: https://www.npmjs.com/package/izy-proxy
git: https://github.com/izyware/izy-proxy
izyware: https://izyware.com
izy-proxy-help-article: https://izyware.com/help/article/izy-proxy-readme