Let $T$ be the task that the service composition needs to accomplish. The task $T$ can be granulated into subtasks $T_1, T_2, T_3, \dots, T_n$, i.e. $T = \{T_1, T_2, T_3, \dots, T_n\}$. For each subtask $T_i$, a set of candidate services $S_i = \{S_{i1}, S_{i2}, S_{i3}, \dots, S_{im}\}$ is discovered during the service discovery process, such that all services in a set $S_i$ perform the same function and have the same input and output parameters (see Figure 2):

$S_1 = \{S_{11}, S_{12}, \dots, S_{1m}\}$, $S_2 = \{S_{21}, S_{22}, \dots, S_{2m}\}$, $\dots$, $S_n = \{S_{n1}, S_{n2}, \dots, S_{nm}\}$

To compose the big service, one service must be selected from each set $S_i$ such that the overall QoS attributes of the composition are optimal. With $n$ subtasks and $m$ candidate services per subtask, the total number of possible distinct compositions is $m^n$. Let $k$ be the number of QoS attributes; an exhaustive search then requires $k \cdot m^n$ comparisons to confirm that a selection is optimal. This exponential search space is what makes QoS-aware service selection NP-hard.
(Image credit: Papersgraph)
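To make the combinatorial argument above concrete, the following minimal Python sketch enumerates all $m^n$ candidate compositions by brute force and scores each against two illustrative QoS attributes (response time and availability). The service names, attribute values, and the weighted utility function are hypothetical assumptions for illustration, not taken from any specific paper.

```python
from itertools import product

# Hypothetical candidate services for n = 3 subtasks, m = 2 candidates each.
# Each service carries k = 2 QoS attributes (illustrative values):
# response time (lower is better) and availability (higher is better).
candidates = [
    [{"name": "S11", "time": 120, "avail": 0.95},
     {"name": "S12", "time": 200, "avail": 0.99}],
    [{"name": "S21", "time": 80,  "avail": 0.90},
     {"name": "S22", "time": 150, "avail": 0.97}],
    [{"name": "S31", "time": 60,  "avail": 0.93},
     {"name": "S32", "time": 90,  "avail": 0.99}],
]

def score(composition):
    """Aggregate QoS of one composition: total response time and product of availabilities."""
    total_time = sum(s["time"] for s in composition)
    total_avail = 1.0
    for s in composition:
        total_avail *= s["avail"]
    # Weighted utility; the weights are illustrative assumptions.
    return -0.5 * total_time + 100.0 * total_avail

# Exhaustive search: m**n compositions, each evaluated over k attributes.
best = max(product(*candidates), key=score)
print([s["name"] for s in best], score(best))
```

Even in this toy setting the search visits every composition, which is why practical approaches fall back on heuristics or approximation rather than exhaustive comparison.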
The main contribution is the development of a cognitively-inspired agent-based service composition model focused on bounded rationality rather than optimality, which allows the system to compensate for limited resources by selectively filtering out continuous streams of data.
This work claims that the effort of developing service descriptions, request translations, and service matching could be reduced by using unrestricted natural language, allowing end-users to express their needs intuitively and service developers to build services without relying on syntactic/semantic description languages.
Chat4XAI is introduced to facilitate understanding of deep RL decision-making by providing natural-language explanations; its reported benefits include better understandability for non-technical users, increased user acceptance and trust, and more efficient explanations.