Merge 1216.1.1 from Ubuntu (reload command), add test case.

__abort_msg should be declared weak, or otherwise changed
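
A minimal sketch of how a symbol is declared weak with GCC; the variable
and its use here are only illustrative, not Upstart's actual definition:

    #include <stdio.h>

    /* A weak definition: a strong definition of the same name elsewhere
     * in the link overrides this one instead of causing a clash. */
    char *__abort_msg __attribute__ ((weak)) = NULL;

    int
    main (void)
    {
            printf ("__abort_msg is %s\n",
                    __abort_msg ? __abort_msg : "(unset)");
            return 0;
    }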



0.6.x series:

 * when going into runlevel S from any other runlevel, we need to stty the
   console
 * when going into runlevel 6, 0 or 1, we need to stty the console
 * reset the console before the S process

 * we should mark fds and sockets close-on-exec when we create them, rather
   than later
 * the fork() pipe should be close-on-exec rather than closing it in the
   child
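
   A minimal sketch of both points, using plain Linux/POSIX calls rather
   than Upstart's own wrappers (error checking omitted):

       #define _GNU_SOURCE
       #include <fcntl.h>
       #include <sys/socket.h>
       #include <unistd.h>

       int
       main (void)
       {
               int fd, sock, fds[2];

               /* Set the flag atomically at creation time, instead of a
                * separate fcntl (fd, F_SETFD, FD_CLOEXEC) call later. */
               fd = open ("/dev/null", O_RDWR | O_CLOEXEC);

               /* Sockets can be created the same way. */
               sock = socket (AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);

               /* And the pipe used across fork (), so the child never
                * has to close it explicitly. */
               pipe2 (fds, O_CLOEXEC);

               close (fd);
               close (sock);
               close (fds[0]);
               close (fds[1]);
               return 0;
       }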

 * init needs to grow "utmp XXX", which maintains INIT_PROCESS and
   DEAD_PROCESS entries for the given ut_id (3 chars max), used for getty

   For that we'll probably need to support ${..#...} so that we can do
   utmp ${TTY#tty}
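
   A minimal sketch of the bookkeeping such a stanza would do, using the
   standard utmpx API (function and argument names here are illustrative);
   ${TTY#tty} is shell-style prefix removal, turning e.g. "tty1" into "1":

       #include <string.h>
       #include <sys/time.h>
       #include <sys/types.h>
       #include <utmpx.h>

       /* Record PID against the 1-3 character ut_id, with type
        * INIT_PROCESS when the getty starts and DEAD_PROCESS when it
        * exits. */
       static void
       utmp_record (short type, pid_t pid, const char *id, const char *line)
       {
               struct utmpx   ut;
               struct timeval tv;

               memset (&ut, 0, sizeof ut);
               ut.ut_type = type;
               ut.ut_pid = pid;
               strncpy (ut.ut_id, id, sizeof ut.ut_id);
               if (line)
                       strncpy (ut.ut_line, line, sizeof ut.ut_line - 1);

               gettimeofday (&tv, NULL);
               ut.ut_tv.tv_sec = tv.tv_sec;
               ut.ut_tv.tv_usec = tv.tv_usec;

               setutxent ();
               pututxline (&ut);
               endutxent ();
       }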

 * There's a lot of complicated code that's pretty much duplicated between
   event_pending_handle_jobs(), job_class_start(), job_class_stop(),
   job_class_restart(), job_class_get_instance() and to a lesser extent,
   job_start(), job_stop() and job_restart().  We should make an effort to
   reduce this to common functions, which may become easier as we rationalise
   the behaviour anyway.

 * It may be useful for job_finished() to take a more detailed reason for
   why the job is being unblocked, rather than just failed=TRUE/FALSE, so
   the command could exit saying "ok", "failed", "stopped by event", etc.
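
   A minimal sketch of what that might look like; the enum and its values
   are hypothetical, not existing Upstart API:

       /* Possible replacement for the failed=TRUE/FALSE argument to
        * job_finished (), so blocked "start"/"stop" commands can report
        * why they were unblocked. */
       typedef enum {
               JOB_UNBLOCKED_OK,                /* goal reached normally */
               JOB_UNBLOCKED_FAILED,            /* a job process failed */
               JOB_UNBLOCKED_STOPPED_BY_EVENT,  /* a stop event intervened */
               JOB_UNBLOCKED_DELETED            /* configuration went away */
       } JobUnblockReason;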

 * It would also be nice, in the case of a job having failed, to be able
   to include the event-like information in the error.  This should be
   possible; it just needs a marshalling function.  (Maybe we should be
   able to generate those.)

 * Information about what caused a job to stop (failed information or event)
   should be available to other jobs, and to the job's post-stop script.

 * I'm still not convinced that just clearing blocking is the right approach,
   and still think we need some kind of next_blocking list of things that
   will still be blocked next time around.  This would restore some of the
   older behaviour, in that "start" would block until the job is stopped, and
   fail with the fact that it was stopped.

 * Need to add dependencies to jobs, which are files that must exist before
   the job can be started (because Debian/Ubuntu like to leave config files,
   such as jobs, lying around)

 * Resources, "uses cpu 1.0" -- where cpu has a defined max (default 1.0);
   which state do we keep it in while it's waiting?


Later:

 * Restore serialisation of state between upstart processes; we'll probably
   use a D-Bus API to do this.  A peer-to-peer D-Bus connection makes the
   most sense, so we don't need the bus itself (think initramfs; see the
   sketch after this list), but we also need to hand over the bus connection
   so we don't drop it.

    - Pass the Event queue first since Jobs refer to it
    - Register each ConfSource, ConfFile, JobClass and Job, setting
      the status of each
    - Join up the Event queue and Job structures

    - What about commands blocked on event emissions or jobs?
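
   A minimal sketch of the peer-to-peer connection mentioned above, using
   libdbus directly; the socket address is illustrative and the state
   exchange itself is not shown:

       #include <stdio.h>
       #include <dbus/dbus.h>

       int
       main (void)
       {
               DBusError       error;
               DBusConnection *conn;

               dbus_error_init (&error);

               /* Connect straight to a listening socket rather than via
                * the bus daemon, so this works before D-Bus is up. */
               conn = dbus_connection_open_private (
                       "unix:abstract=/com/ubuntu/upstart", &error);
               if (! conn) {
                       fprintf (stderr, "failed: %s\n", error.message);
                       dbus_error_free (&error);
                       return 1;
               }

               /* ... serialised state would be exchanged here ... */

               dbus_connection_close (conn);
               dbus_connection_unref (conn);
               return 0;
       }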


Anytime:

 * Iterating through every Job's start and stop events is messy; we should
   have some kind of match lookup table to make it easier.

 * Likewise iterating through all the Jobs to find a pid is messy; we
   should have a lookup table for these too.  Ideally we'd have a JobProcess
   structure combining type, pid and a link to the job -- then all the
   job_process_* functions would just accept those
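
   A minimal sketch of such a structure; the exact fields are hypothetical:

       #include <sys/types.h>

       /* A pid-keyed lookup table of these would replace iterating over
        * every Job to find the one owning a pid. */
       typedef struct job_process {
               struct job *job;   /* job this process belongs to */
               int         type;  /* which process: main, pre-start, ... */
               pid_t       pid;   /* process id, or 0 if not running */
       } JobProcess;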

 * system_setup_console is due for an overhaul as well, especially if
   we want to be able to pass file descriptors in.  I'm somewhat tempted
   to add a magic CONSOLE_DEFAULT option which tries fd, logging, null,
   etc.
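
   A minimal sketch of the fallback such an option might perform; the
   function is hypothetical and the logging step is not shown:

       #include <fcntl.h>
       #include <unistd.h>

       /* CONSOLE_DEFAULT: use a descriptor we were handed if there is
        * one, otherwise fall back through the console and finally
        * /dev/null, so a job always gets usable stdin/stdout/stderr. */
       static int
       console_default_fd (int passed_fd)
       {
               int fd;

               if (passed_fd >= 0)
                       return passed_fd;

               fd = open ("/dev/console", O_RDWR | O_NOCTTY);
               if (fd >= 0)
                       return fd;

               return open ("/dev/null", O_RDWR);
       }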

 * We always want /dev/fd/NNN to be /dev/fd/3; we should have some way
   to instruct process_spawn to do that.
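
   A minimal sketch of what process_spawn would do in the child for this,
   using plain dup2 (); the helper name is made up:

       #include <unistd.h>

       /* After fork () but before exec (): move whatever descriptor we
        * were given onto the well-known number 3, so the job can always
        * open it as /dev/fd/3. */
       static int
       pin_fd_to_3 (int fd)
       {
               if (fd == 3)
                       return 0;

               if (dup2 (fd, 3) < 0)
                       return -1;

               close (fd);
               return 0;
       }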

 * We may need to KILL scripts, e.g. post-start; especially when the goal
   changes.  Or perhaps just after a timeout?

 * May need a way to force ignoring of the KILL signal, and to assume that
   a job that won't die really has died.

 * Get the LANG environment variable somehow.


Future Features:

 * Roles; services define roles that they can perform ("web server") and
   can be found by their role.  Other jobs could require that a role be
   performed for them to start (creeping into deps here).  Use affinity
   tags to work out which of many services to start.

 * Per-user services; will need to use PAM to set up the session.
   We want to do this for "root-user services" but not for jobs/tasks
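
   A minimal sketch of setting up the session with the standard Linux-PAM
   API; the "upstart-user" service name and helper are hypothetical:

       #include <stddef.h>
       #include <security/pam_appl.h>

       /* init cannot prompt the user, so the conversation function just
        * refuses any request. */
       static int
       null_conv (int num_msg, const struct pam_message **msg,
                  struct pam_response **resp, void *appdata_ptr)
       {
               (void)num_msg; (void)msg; (void)resp; (void)appdata_ptr;
               return PAM_CONV_ERR;
       }

       /* Open (and leave open) a PAM session for USER; call
        * pam_close_session () and pam_end () when the service stops. */
       static pam_handle_t *
       open_user_session (const char *user)
       {
               struct pam_conv conv = { null_conv, NULL };
               pam_handle_t   *pamh = NULL;

               if (pam_start ("upstart-user", user, &conv, &pamh) != PAM_SUCCESS)
                       return NULL;

               if (pam_open_session (pamh, 0) != PAM_SUCCESS) {
                       pam_end (pamh, PAM_ABORT);
                       return NULL;
               }

               return pamh;
       }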

 * Passing of file descriptors from event over control socket.
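
   A minimal sketch using SCM_RIGHTS ancillary data over a UNIX domain
   socket; this is the standard mechanism, not the actual control
   protocol, and the event payload is reduced to a single byte:

       #include <string.h>
       #include <sys/socket.h>
       #include <sys/uio.h>

       /* Send FD over the connected UNIX domain socket SOCK. */
       static int
       send_fd (int sock, int fd)
       {
               char            byte = 0;
               struct iovec    iov = { .iov_base = &byte, .iov_len = 1 };
               char            control[CMSG_SPACE (sizeof fd)];
               struct msghdr   msg;
               struct cmsghdr *cmsg;

               memset (&msg, 0, sizeof msg);
               memset (control, 0, sizeof control);
               msg.msg_iov = &iov;
               msg.msg_iovlen = 1;
               msg.msg_control = control;
               msg.msg_controllen = sizeof control;

               cmsg = CMSG_FIRSTHDR (&msg);
               cmsg->cmsg_level = SOL_SOCKET;
               cmsg->cmsg_type = SCM_RIGHTS;
               cmsg->cmsg_len = CMSG_LEN (sizeof fd);
               memcpy (CMSG_DATA (cmsg), &fd, sizeof fd);

               return (sendmsg (sock, &msg, 0) < 0) ? -1 : 0;
       }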

 * Register jobs over the control socket; the ideal way is to register some
   kind of automatic source and attach them to that.

 * Temporal events ("15m after startup")

 * Scheduled times ("every day at 3:00")

 * Load average checking, maybe have separate CPU, Network and I/O
   stats?  See also resources.
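
   A minimal sketch of reading the overall load average with getloadavg ();
   separate CPU, network and I/O figures would need other sources such as
   /proc:

       #include <stdio.h>
       #include <stdlib.h>

       int
       main (void)
       {
               double load[3];

               /* 1, 5 and 15 minute load averages, as a "load below X"
                * style condition would use. */
               if (getloadavg (load, 3) != 3) {
                       fprintf (stderr, "getloadavg failed\n");
                       return 1;
               }

               printf ("load: %.2f %.2f %.2f\n", load[0], load[1], load[2]);
               return 0;
       }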

 * Actions: "reload" and optional replacements for "stop", "start", etc.

   This is mostly just a matter of deciding policy for when they can be run,
   and adding "user processes" onto the end of the job->process array.

 * Alternative script interpreters; "start script python".

   This would be done by making script a char *, and putting the interpreter
   into command?

 * Watershed jobs (this actually might apply to events, since you might
   want to try starting again if a particular event has come in since you
   were last started)
